[
  {
    "path": ".gitignore",
    "content": "solutions/\n.DS_Store\n.vscode/\nanswers.md\n"
  },
  {
    "path": "README.md",
    "content": "# Essentials of Computer Vision  \n\n![](assets/blurb.png)\n\nA math-first approach to learning computer vision in Python. The repository contains all HTML, PDF, Markdown, Python scripts, data, and media assets (images or links to supplementary videos). If you wish to contribute, I need translations into Bahasa Indonesia. Please submit a Pull Request.\n\n## Study Guide\n### Chapter 1\n- Affine Transformation\n    - [Definition](transformation/lecture_affine.html#definition)\n        - [Mathematical Definitions](transformation/lecture_affine.html#mathematical-definitions)\n    - [Practical Examples](transformation/lecture_affine.html#practical-examples)\n    - [Motivation](transformation/lecture_affine.html#motivation)\n    - [Getting Affine Transformation](transformation/lecture_affine.html#getting_affine-transformation)\n        - [Trigonometry Proof](transformation/lecture_affine.html#trigonometry-proof)\n    - [Code Illustrations](transformation/lecture_affine.html#code-illustrations)\n    - [Summary and Key Points](transformation/lecture_affine.html#summary-and-key-points)\n    - Optional video\n        - [Rotation Matrix Explained Visually](https://www.youtube.com/watch?v=tIixrNtLJ8U)\n            - [w/ Bahasa Indonesia voiceover](https://www.youtube.com/watch?v=pWfXR_HmyUw)\n    - References and learn-by-building modules\n\n### Chapter 2\n- Kernel Convolutions\n    - [Definition](edgedetect/kernel.html#definition)\n        - Optional video\n            - [Kernel Convolutions Explained Visually](https://www.youtube.com/watch?v=WMmHcrX4Obg)\n        - [Mathematical Definitions](edgedetect/kernel.html#mathematical-definitions)\n        - [Padding](edgedetect/kernel.html#a-note-on-padding)\n    - [Smoothing and Blurring](edgedetect/kernel.html#smoothing-and-blurring)\n    - [A Note on Terminology](edgedetect/kernel.html#a-note-on-terminology)\n        - Kernels or Filters?\n        - Correlations vs Convolutions?\n    - [Code Illustrations: 
Mean Filtering](edgedetect/kernel.html#code-illustrations-mean-filtering)\n    - [Role in Convolutional Neural Networks](edgedetect/kernel.html#role-in-convolutional-neural-networks)\n    - [Handy Kernels for Image Processing](edgedetect/kernel.html#handy-kernels-for-image-processing)\n        - [Gaussian Filtering](edgedetect/kernel.html#gaussian-filtering)\n        - [Sharpening Kernels](edgedetect/kernel.html#sharpening-kernels)\n        - [Gaussian Kernels for Sharpening](edgedetect/kernel.html#approximate-gaussian-kernel-for-sharpening)\n        - [Unsharp Masking](edgedetect/kernel.html#unsharp-masking)\n    - [Summary and Key Points](edgedetect/kernel.html#summary-and-key-points)\n    - References and learn-by-building modules\n\n### Chapter 3\n- Edge Detection\n    - [Definition](edgedetect/edgedetect.html#definition)\n    - [Gradient-based Edge Detection](edgedetect/edgedetect.html#gradient-based-edge-detection)\n        - [Sobel Operator](edgedetect/edgedetect.html#sobel-operator)\n            - [Discrete Derivative](edgedetect/edgedetect.html#intuition-discrete-derivative)\n            - [Code Illustrations: Sobel Operator](edgedetect/edgedetect.html#code-illustrations-sobel-operator)\n        - [Gradient Orientation & Magnitude](edgedetect/edgedetect.html#dive-deeper-gradient-orientation-magnitude)\n    - [Image Segmentation](edgedetect/edgedetect.html#image-segmentation)\n        - [Intensity-based Segmentation](edgedetect/edgedetect.html#intensity-based-segmentation)\n            - [Simple Thresholding](edgedetect/edgedetect.html#simple-thresholding)\n            - [Adaptive Thresholding](edgedetect/edgedetect.html#adaptive-thresholding)\n        - [Edge-based Contour Estimation](edgedetect/edgedetect.html#edge-based-contour-estimation)\n            - [Contour Retrieval and Approximation](edgedetect/edgedetect.html#contour-retrieval-and-approximation)\n    - [Canny Edge Detector](edgedetect/edgedetect.html#canny-edge-detector)\n        - [Edge 
Thinning](edgedetect/edgedetect.html#edge-thinning)\n        - [Hysteresis Thresholding](edgedetect/edgedetect.html#hysteresis-thresholding)\n    - References and learn-by-building modules\n\n### Chapter 4\n- Digit Classification\n    - [A Note on Deep Learning](digitrecognition/digitrec.html#what-about-deep-learning)\n        - [Why not MNIST?](digitrecognition/digitrec.html#region-of-interest)\n    - Region of Interest\n        - [ROI identification](digitrecognition/digitrec.html#selecting-region-of-interest)\n        - [Arc Length and Area Size](digitrecognition/digitrec.html#arc-length-and-area-size)\n            - [Dive Deeper: ROI](digitrecognition/digitrec.html#dive-deeper-roi)\n        - [ROI extraction](digitrecognition/digitrec.html#roi-extraction)\n    - [Morphological Transformations](digitrecognition/digitrec.html#morphological-transformations)\n        - [Erosion](digitrecognition/digitrec.html#erosion)\n        - [Dilation](digitrecognition/digitrec.html#dilation)\n        - [Opening and Closing](digitrecognition/digitrec.html#opening-and-closing)\n        - [Learn-by-building: Morphological Transformation](digitrecognition/digitrec.html#learn-by-building-morphological-transformation)\n    - [Seven-segment display](digitrecognition/digitrec.html#seven-segment-display)\n        - [Practical Strategies](digitrecognition/digitrec.html#practical-strategies)\n            - [Contour Properties](digitrecognition/digitrec.html#contour-properties)\n    - [References and learn-by-building modules](digitrecognition/digitrec.html#references)\n\n### Chapter 5\n- Facial Recognition\n\n## Approach and Motivation\nThe course is foundational to anyone who wishes to work with computer vision in Python. 
It covers some of the most common image processing routines, and offers in-depth coverage of the mathematical concepts behind them: \n- Math-first approach\n- Tons of sample python scripts (.py)\n    - 45+ python scripts from chapter 1 to 4 for plug-and-play experiments\n- Multimedia (image illustrations, video explanation, quiz)\n    - 57 image assets from chapter 1 to 4 for practical illustrations\n    - 4 PDFs, and 4 HTMLs, one for each chapter\n- Practical tips on real-world applications\n\nThe course's **only dependency** is `OpenCV`. Getting started is as easy as `pip install opencv-contrib-python` and you're set to go.\n\n##### Question: What about deep learning libraries?\n\nNo. While deep learning for images makes for an interesting topic, it is probably better suited to an altogether separate course series. This course (tutorial series) focuses on the **essentials of computer vision** and, for pedagogical reasons, tries not to be overly ambitious with the scope it intends to cover. \n\nThere will be similarities in concepts and principles, as modern neural network architectures draw plenty of inspiration from the \"classical\" computer vision techniques that predate them. By first learning how computer vision problems are solved classically, the student can compare that to the deep learning equivalent, which results in a more comprehensive appreciation of what deep learning offers to modern-day computer scientists. \n\n## Course Materials Preview:\n### Python scripts\n![](digitrecognition/assets/croproi.gif)\n\n### PDF and HTML\n![](assets/ecv_caption.gif)\n\n\n# Workshops\nI conduct in-person lectures using the materials you find in this repository. These workshops are usually paid because there are upfront costs to afford a venue and crew. Not just any venue, but a learning environment that is fully equipped (audio, desks, charging points for everyone, massive screen projector, walking space for teaching assistants, dinner). 
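As an extra preview of the kind of code you'll write in Chapter 4, here is a minimal, OpenCV-free sketch of the pipeline's final step: mapping a seven-segment on/off pattern to the digit it displays. The lookup table mirrors `DIGITSDICT` in `digitrecognition/digit_01.py`; the `decode` helper is purely illustrative.

```python
# Each digit is described by which of its seven segments (a-g, in the
# order used by digit_01.py) are lit: 1 = on, 0 = off.
DIGITSDICT = {
    (1, 1, 1, 1, 1, 1, 0): 0,
    (0, 1, 1, 0, 0, 0, 0): 1,
    (1, 1, 0, 1, 1, 0, 1): 2,
    (1, 1, 1, 1, 0, 0, 1): 3,
    (0, 1, 1, 0, 0, 1, 1): 4,
    (1, 0, 1, 1, 0, 1, 1): 5,
    (1, 0, 1, 1, 1, 1, 1): 6,
    (1, 1, 1, 0, 0, 1, 0): 7,
    (1, 1, 1, 1, 1, 1, 1): 8,
    (1, 1, 1, 1, 0, 1, 1): 9,
}


def decode(segments):
    """Map a 7-tuple of segment states (a-g) to the digit it displays."""
    return DIGITSDICT[tuple(segments)]


print(decode((1, 1, 1, 1, 1, 1, 1)))  # 8
```

In the full pipeline, each 7-tuple comes from thresholding the pixel density of seven rectangular regions inside a digit's bounding box; everything before that step is classical image preprocessing.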
\n\nYou can follow me [on LinkedIn](http://linkedin.com/in/chansamuel/) to stay updated on the latest workshops. I also make long-form programming tutorials and lessons on computer vision on [my YouTube channel](https://www.youtube.com/@SamuelChan).\n\n### Introduction to AI in Computer Vision\n- 4th January 2020, Jakarta\n    - Kantorkuu, Citywalk Sudirman, Jakarta Pusat\n    - Time: 1300-1600\n    - 3 hours\n    - Fee: Free for Algoritma Alumni, 100k IDR for public\n\n### Computer Vision: Principles and Practice\n- 21st and 22nd January 2020, Jakarta\n    - Accelerice, Jl. Rasuna Said, Jakarta Selatan\n    - Time: 1830-2130\n    - 6 hours\n    - Fee: Free for Algoritma Alumni, 1.5m IDR for public\n\n- 24th and 25th February 2020, Bangkok\n    - JustCo, Samyan Mitrtown\n    - Time: 1830-2130\n    - 6 hours\n    - Fee: Free for Algoritma Alumni, 9000 THB for public\n\n\n## Image Assets\n- `car2.png`, `pen.jpg`, `lego.jpg` and `sudoku.jpg` are under a Creative Commons (CC) license.\n\n- `sarpi.jpg`, `castello.png`, `canal.png` and all other photographs used were taken during my trip to Venice and you are free to use them. \n\n- All assets in Chapter 4 (the `digitrecognition` folder) are mine and you are free to use them.\n\n- All other illustrations are created by me in Keynote. \n\n- Videos are created by me, and the Bahasa Indonesia voiceover on my videos is by [Tiara Dwiputri](https://github.com/tiaradwiputri)\n\n## New to programming? 50-minute Quick Start\nHere's a video I created, [Computer Vision Essentials 1](https://youtu.be/NWXY4ASRlgA), to get you through installation and the first steps of this lesson path.\n\nIf you need help with the course, attend my in-person workshops on this topic (Computer Vision Essentials, free) throughout the course of the year.\n\n## Follow me\n- [YouTube](https://www.youtube.com/@SamuelChan)\n- [LinkedIn](http://linkedin.com/in/chansamuel/)\n- [GitHub](https://github.com/onlyphantom)\n"
  },
  {
    "path": "digitrecognition/contourarea_01.py",
    "content": "import cv2\n\nBCOLOR = (75, 0, 130)\nTHICKNESS = 4\n\nimg_color = cv2.imread(\"assets/ocbc.jpg\")\nimg_color = cv2.resize(img_color, None, None, fx=0.5, fy=0.5)\nimg = cv2.cvtColor(img_color, cv2.COLOR_BGR2GRAY)\n\nblurred = cv2.GaussianBlur(img, (7, 7), 0)\nblurred = cv2.bilateralFilter(blurred, 5, sigmaColor=50, sigmaSpace=50)\nedged = cv2.Canny(blurred, 130, 150)\n\ncv2.imshow(\"Outline of device\", edged)\ncv2.waitKey(0)\n\ncnts, _ = cv2.findContours(edged, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)\n# sort contours by area (largest first) and keep the nine largest\ncnts = sorted(cnts, key=cv2.contourArea, reverse=True)[:9]\n\ncv2.drawContours(img_color, cnts, 0, BCOLOR, THICKNESS)\ncv2.imshow(\"Target Contour\", img_color)\ncv2.waitKey(0)\n\nfor i, cnt in enumerate(cnts):\n    cv2.drawContours(img_color, cnts, i, BCOLOR, THICKNESS)\n    print(f\"ContourArea:{cv2.contourArea(cnt)}\")\n    cv2.imshow(\"Contour one by one\", img_color)\n    cv2.waitKey(0)\n"
  },
  {
    "path": "digitrecognition/contourarea_02.py",
    "content": "import cv2\n\nPURPLE = (75, 0, 130)\nYELLOW = (0, 255, 255)\nTHICKNESS = 4\nFONT = cv2.FONT_HERSHEY_SIMPLEX\n\nimg_color = cv2.imread(\"assets/ocbc.jpg\")\nimg_color = cv2.resize(img_color, None, None, fx=0.5, fy=0.5)\nimg = cv2.cvtColor(img_color, cv2.COLOR_BGR2GRAY)\n\nblurred = cv2.GaussianBlur(img, (7, 7), 0)\nblurred = cv2.bilateralFilter(blurred, 5, sigmaColor=50, sigmaSpace=50)\nedged = cv2.Canny(blurred, 130, 150)\n\ncv2.imshow(\"Outline of device\", edged)\ncv2.waitKey(0)\n\ncnts, _ = cv2.findContours(edged, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)\n# sort contours by area (largest first) and keep the ten largest\ncnts = sorted(cnts, key=cv2.contourArea, reverse=True)[:10]\n\nfor i, cnt in enumerate(cnts):\n    cv2.drawContours(img_color, cnts, i, PURPLE, THICKNESS)\n    x, y, w, h = cv2.boundingRect(cnt)\n    cv2.rectangle(img_color, (x, y), (x + w, y + h), YELLOW, THICKNESS)\n    area = round(cv2.contourArea(cnt), 1)\n    peri = round(cv2.arcLength(cnt, closed=True), 1)\n    print(f\"ContourArea:{area}, Peri: {peri}\")\n    cv2.putText(img_color, \"Area:\" + str(area), (x, y - 15), FONT, 0.4, PURPLE, 1)\n    cv2.putText(img_color, \"Perimeter:\" + str(peri), (x, y - 5), FONT, 0.4, PURPLE, 1)\n\ncv2.imshow(\"Contours\", img_color)\ncv2.waitKey(0)\n"
  },
  {
    "path": "digitrecognition/contourarea_03.py",
    "content": "import cv2\n\nPURPLE = (75, 0, 130)\nYELLOW = (0, 255, 255)\nTHICKNESS = 4\nFONT = cv2.FONT_HERSHEY_SIMPLEX\n\nimg_color = cv2.imread(\"assets/ocbc.jpg\")\nimg_color = cv2.resize(img_color, None, None, fx=0.5, fy=0.5)\nimg = cv2.cvtColor(img_color, cv2.COLOR_BGR2GRAY)\n\nblurred = cv2.GaussianBlur(img, (7, 7), 0)\nblurred = cv2.bilateralFilter(blurred, 5, sigmaColor=50, sigmaSpace=50)\nedged = cv2.Canny(blurred, 130, 150)\n\ncv2.imshow(\"Outline of device\", edged)\ncv2.waitKey(0)\n\ncnts, _ = cv2.findContours(edged, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)\n# sort contours by area (largest first) and keep the nine largest\ncnts = sorted(cnts, key=cv2.contourArea, reverse=True)[:9]\n\ncv2.drawContours(img_color, cnts, 0, PURPLE, THICKNESS)\ncv2.imshow(\"Target Contour\", img_color)\ncv2.waitKey(0)\n\nfor i in range(len(cnts)):\n    cv2.drawContours(img_color, cnts, i, PURPLE, THICKNESS)\n    print(f\"ContourArea:{cv2.contourArea(cnts[i])}\")\n    x, y, w, h = cv2.boundingRect(cnts[i])\n    cv2.rectangle(img_color, (x, y), (x + w, y + h), YELLOW, THICKNESS)\n\n    area = round(cv2.contourArea(cnts[i]), 1)\n    peri = round(cv2.arcLength(cnts[i], closed=True), 1)\n    print(f\"ContourArea:{area}, Peri: {peri}\")\n    cv2.putText(img_color, \"Area:\" + str(area), (x, y - 15), FONT, 0.4, PURPLE, 1)\n    cv2.putText(img_color, \"Perimeter:\" + str(peri), (x, y - 5), FONT, 0.4, PURPLE, 1)\n\n    cv2.imshow(\"Contour one by one\", img_color)\n    cv2.waitKey(0)\n"
  },
  {
    "path": "digitrecognition/digit_01.py",
    "content": "import cv2\nimport numpy as np\n\nFONT = cv2.FONT_HERSHEY_SIMPLEX\nCYAN = (255, 255, 0)\nDIGITSDICT = {\n    (1, 1, 1, 1, 1, 1, 0): 0,\n    (0, 1, 1, 0, 0, 0, 0): 1,\n    (1, 1, 0, 1, 1, 0, 1): 2,\n    (1, 1, 1, 1, 0, 0, 1): 3,\n    (0, 1, 1, 0, 0, 1, 1): 4,\n    (1, 0, 1, 1, 0, 1, 1): 5,\n    (1, 0, 1, 1, 1, 1, 1): 6,\n    (1, 1, 1, 0, 0, 1, 0): 7,\n    (1, 1, 1, 1, 1, 1, 1): 8,\n    (1, 1, 1, 1, 0, 1, 1): 9,\n}\n\n\n# roi_color = cv2.imread(\"inter/dbs-roi.png\")\nroi_color = cv2.imread(\"inter/ocbc-roi.png\")\nroi = cv2.cvtColor(roi_color, cv2.COLOR_BGR2GRAY)\n\nRATIO = roi.shape[0] * 0.2\n\nroi = cv2.bilateralFilter(roi, 5, 30, 60)\n\ntrimmed = roi[int(RATIO) :, int(RATIO) : roi.shape[1] - int(RATIO)]\nroi_color = roi_color[int(RATIO) :, int(RATIO) : roi.shape[1] - int(RATIO)]\ncv2.imshow(\"Blurred and Trimmed\", trimmed)\ncv2.waitKey(0)\n\nedged = cv2.adaptiveThreshold(\n    trimmed, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV, 5, 5\n)\ncv2.imshow(\"Edged\", edged)\ncv2.waitKey(0)\n\nkernel = cv2.getStructuringElement(cv2.MORPH_RECT, (2, 5))\ndilated = cv2.dilate(edged, kernel, iterations=1)\n\ncv2.imshow(\"Dilated\", dilated)\ncv2.waitKey(0)\n\nkernel = cv2.getStructuringElement(cv2.MORPH_RECT, (2, 1))\ndilated = cv2.dilate(dilated, kernel, iterations=1)\n\ncv2.imshow(\"Dilated x2\", dilated)\ncv2.waitKey(0)\n\nkernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (2, 1),)\neroded = cv2.erode(dilated, kernel, iterations=1)\n\ncv2.imshow(\"Eroded\", eroded)\ncv2.waitKey(0)\n\nh = roi.shape[0]\nratio = int(h * 0.07)\neroded[-ratio:,] = 0\neroded[:, :ratio] = 0\n\ncv2.imshow(\"Eroded + Black\", eroded)\ncv2.waitKey(0)\n\ncnts, _ = cv2.findContours(eroded, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)\ndigits_cnts = []\n\ncanvas = trimmed.copy()\ncv2.drawContours(canvas, cnts, -1, (255, 255, 255), 1)\ncv2.imshow(\"All Contours\", canvas)\ncv2.waitKey(0)\n\ncanvas = trimmed.copy()\nfor cnt in cnts:\n    (x, y, w, h) = 
cv2.boundingRect(cnt)\n    if h > 20:\n        digits_cnts += [cnt]\n        cv2.rectangle(canvas, (x, y), (x + w, y + h), (0, 0, 0), 1)\n        cv2.drawContours(canvas, [cnt], -1, (255, 255, 255), 1)\n        cv2.imshow(\"Digit Contours\", canvas)\n        cv2.waitKey(0)\n\nprint(f\"No. of Digit Contours: {len(digits_cnts)}\")\n\n\ncv2.imshow(\"Digit Contours\", canvas)\ncv2.waitKey(0)\n\n\nsorted_digits = sorted(digits_cnts, key=lambda cnt: cv2.boundingRect(cnt)[0])\n\ncanvas = trimmed.copy()\n\n\nfor i, cnt in enumerate(sorted_digits):\n    (x, y, w, h) = cv2.boundingRect(cnt)\n    cv2.rectangle(canvas, (x, y), (x + w, y + h), (0, 0, 0), 1)\n    cv2.putText(canvas, str(i), (x, y - 3), FONT, 0.3, (0, 0, 0), 1)\n\ncv2.imshow(\"All Contours sorted\", canvas)\ncv2.waitKey(0)\n\ndigits = []\ncanvas = roi_color.copy()\nfor cnt in sorted_digits:\n    (x, y, w, h) = cv2.boundingRect(cnt)\n    roi = eroded[y : y + h, x : x + w]\n    print(f\"W:{w}, H:{h}\")\n    # convenience units\n    qW, qH = int(w * 0.25), int(h * 0.15)\n    fractionH, halfH, fractionW = int(h * 0.05), int(h * 0.5), int(w * 0.25)\n\n    # seven segments in the order of Wikipedia's illustration\n    sevensegs = [\n        ((0, 0), (w, qH)),  # a (top bar)\n        ((w - qW, 0), (w, halfH)),  # b (upper right)\n        ((w - qW, halfH), (w, h)),  # c (lower right)\n        ((0, h - qH), (w, h)),  # d (lower bar)\n        ((0, halfH), (qW, h)),  # e (lower left)\n        ((0, 0), (qW, halfH)),  # f (upper left)\n        # ((0, halfH - fractionH), (w, halfH + fractionH)) # center\n        (\n            (0 + fractionW, halfH - fractionH),\n            (w - fractionW, halfH + fractionH),\n        ),  # center\n    ]\n\n    # initialize to off\n    on = [0] * 7\n\n    for (i, ((p1x, p1y), (p2x, p2y))) in enumerate(sevensegs):\n        region = roi[p1y:p2y, p1x:p2x]\n        print(\n            f\"{i}: Sum of 1: {np.sum(region == 255)}, Sum of 0: {np.sum(region == 0)}, Shape: {region.shape}, Size: 
{region.size}\"\n        )\n        if np.sum(region == 255) > region.size * 0.5:\n            on[i] = 1\n        print(f\"State of ON: {on}\")\n\n    digit = DIGITSDICT[tuple(on)]\n    print(f\"Digit is: {digit}\")\n    digits += [digit]\n    cv2.rectangle(canvas, (x, y), (x + w, y + h), CYAN, 1)\n    cv2.putText(canvas, str(digit), (x - 5, y + 6), FONT, 0.3, (0, 0, 0), 1)\n    cv2.imshow(\"Digit\", canvas)\n    cv2.waitKey(0)\n\nprint(f\"Digits on the token are: {digits}\")\n\n"
  },
  {
    "path": "digitrecognition/digitrec.html",
    "content": "<!DOCTYPE html><html><head>\n      <title>digitrec</title>\n      <meta charset=\"utf-8\">\n      <meta name=\"viewport\" content=\"width=device-width, initial-scale=1.0\">\n      \n      \n        <script type=\"text/x-mathjax-config\">\n          MathJax.Hub.Config({\"extensions\":[\"tex2jax.js\"],\"jax\":[\"input/TeX\",\"output/HTML-CSS\"],\"messageStyle\":\"none\",\"tex2jax\":{\"processEnvironments\":false,\"processEscapes\":true,\"inlineMath\":[[\"$\",\"$\"],[\"\\\\(\",\"\\\\)\"]],\"displayMath\":[[\"$$\",\"$$\"],[\"\\\\[\",\"\\\\]\"]]},\"TeX\":{\"extensions\":[\"AMSmath.js\",\"AMSsymbols.js\",\"noErrors.js\",\"noUndefined.js\"]},\"HTML-CSS\":{\"availableFonts\":[\"TeX\"]}});\n        </script>\n        <script type=\"text/javascript\" async src=\"file:////Users/samuel/.vscode/extensions/shd101wyy.markdown-preview-enhanced-0.5.1/node_modules/@shd101wyy/mume/dependencies/mathjax/MathJax.js\" charset=\"UTF-8\"></script>\n        \n      \n      \n\n      \n      \n      \n      \n      \n      \n      \n\n      <style>\n      /**\n * prism.js Github theme based on GitHub's theme.\n * @author Sam Clarke\n */\ncode[class*=\"language-\"],\npre[class*=\"language-\"] {\n  color: #333;\n  background: none;\n  font-family: Consolas, \"Liberation Mono\", Menlo, Courier, monospace;\n  text-align: left;\n  white-space: pre;\n  word-spacing: normal;\n  word-break: normal;\n  word-wrap: normal;\n  line-height: 1.4;\n\n  -moz-tab-size: 8;\n  -o-tab-size: 8;\n  tab-size: 8;\n\n  -webkit-hyphens: none;\n  -moz-hyphens: none;\n  -ms-hyphens: none;\n  hyphens: none;\n}\n\n/* Code blocks */\npre[class*=\"language-\"] {\n  padding: .8em;\n  overflow: auto;\n  /* border: 1px solid #ddd; */\n  border-radius: 3px;\n  /* background: #fff; */\n  background: #f5f5f5;\n}\n\n/* Inline code */\n:not(pre) > code[class*=\"language-\"] {\n  padding: .1em;\n  border-radius: .3em;\n  white-space: normal;\n  background: #f5f5f5;\n}\n\n.token.comment,\n.token.blockquote {\n  
color: #969896;\n}\n\n.token.cdata {\n  color: #183691;\n}\n\n.token.doctype,\n.token.punctuation,\n.token.variable,\n.token.macro.property {\n  color: #333;\n}\n\n.token.operator,\n.token.important,\n.token.keyword,\n.token.rule,\n.token.builtin {\n  color: #a71d5d;\n}\n\n.token.string,\n.token.url,\n.token.regex,\n.token.attr-value {\n  color: #183691;\n}\n\n.token.property,\n.token.number,\n.token.boolean,\n.token.entity,\n.token.atrule,\n.token.constant,\n.token.symbol,\n.token.command,\n.token.code {\n  color: #0086b3;\n}\n\n.token.tag,\n.token.selector,\n.token.prolog {\n  color: #63a35c;\n}\n\n.token.function,\n.token.namespace,\n.token.pseudo-element,\n.token.class,\n.token.class-name,\n.token.pseudo-class,\n.token.id,\n.token.url-reference .token.variable,\n.token.attr-name {\n  color: #795da3;\n}\n\n.token.entity {\n  cursor: help;\n}\n\n.token.title,\n.token.title .token.punctuation {\n  font-weight: bold;\n  color: #1d3e81;\n}\n\n.token.list {\n  color: #ed6a43;\n}\n\n.token.inserted {\n  background-color: #eaffea;\n  color: #55a532;\n}\n\n.token.deleted {\n  background-color: #ffecec;\n  color: #bd2c00;\n}\n\n.token.bold {\n  font-weight: bold;\n}\n\n.token.italic {\n  font-style: italic;\n}\n\n\n/* JSON */\n.language-json .token.property {\n  color: #183691;\n}\n\n.language-markup .token.tag .token.punctuation {\n  color: #333;\n}\n\n/* CSS */\ncode.language-css,\n.language-css .token.function {\n  color: #0086b3;\n}\n\n/* YAML */\n.language-yaml .token.atrule {\n  color: #63a35c;\n}\n\ncode.language-yaml {\n  color: #183691;\n}\n\n/* Ruby */\n.language-ruby .token.function {\n  color: #333;\n}\n\n/* Markdown */\n.language-markdown .token.url {\n  color: #795da3;\n}\n\n/* Makefile */\n.language-makefile .token.symbol {\n  color: #795da3;\n}\n\n.language-makefile .token.variable {\n  color: #183691;\n}\n\n.language-makefile .token.builtin {\n  color: #0086b3;\n}\n\n/* Bash */\n.language-bash .token.keyword {\n  color: #0086b3;\n}\n\n/* highlight 
*/\npre[data-line] {\n  position: relative;\n  padding: 1em 0 1em 3em;\n}\npre[data-line] .line-highlight-wrapper {\n  position: absolute;\n  top: 0;\n  left: 0;\n  background-color: transparent;\n  display: block;\n  width: 100%;\n}\n\npre[data-line] .line-highlight {\n  position: absolute;\n  left: 0;\n  right: 0;\n  padding: inherit 0;\n  margin-top: 1em;\n  background: hsla(24, 20%, 50%,.08);\n  background: linear-gradient(to right, hsla(24, 20%, 50%,.1) 70%, hsla(24, 20%, 50%,0));\n  pointer-events: none;\n  line-height: inherit;\n  white-space: pre;\n}\n\npre[data-line] .line-highlight:before, \npre[data-line] .line-highlight[data-end]:after {\n  content: attr(data-start);\n  position: absolute;\n  top: .4em;\n  left: .6em;\n  min-width: 1em;\n  padding: 0 .5em;\n  background-color: hsla(24, 20%, 50%,.4);\n  color: hsl(24, 20%, 95%);\n  font: bold 65%/1.5 sans-serif;\n  text-align: center;\n  vertical-align: .3em;\n  border-radius: 999px;\n  text-shadow: none;\n  box-shadow: 0 1px white;\n}\n\npre[data-line] .line-highlight[data-end]:after {\n  content: attr(data-end);\n  top: auto;\n  bottom: .4em;\n}html body{font-family:\"Helvetica Neue\",Helvetica,\"Segoe UI\",Arial,freesans,sans-serif;font-size:16px;line-height:1.6;color:#333;background-color:#fff;overflow:initial;box-sizing:border-box;word-wrap:break-word}html body>:first-child{margin-top:0}html body h1,html body h2,html body h3,html body h4,html body h5,html body h6{line-height:1.2;margin-top:1em;margin-bottom:16px;color:#000}html body h1{font-size:2.25em;font-weight:300;padding-bottom:.3em}html body h2{font-size:1.75em;font-weight:400;padding-bottom:.3em}html body h3{font-size:1.5em;font-weight:500}html body h4{font-size:1.25em;font-weight:600}html body h5{font-size:1.1em;font-weight:600}html body h6{font-size:1em;font-weight:600}html body h1,html body h2,html body h3,html body h4,html body h5{font-weight:600}html body h5{font-size:1em}html body h6{color:#5c5c5c}html body strong{color:#000}html body 
del{color:#5c5c5c}html body a:not([href]){color:inherit;text-decoration:none}html body a{color:#08c;text-decoration:none}html body a:hover{color:#00a3f5;text-decoration:none}html body img{max-width:100%}html body>p{margin-top:0;margin-bottom:16px;word-wrap:break-word}html body>ul,html body>ol{margin-bottom:16px}html body ul,html body ol{padding-left:2em}html body ul.no-list,html body ol.no-list{padding:0;list-style-type:none}html body ul ul,html body ul ol,html body ol ol,html body ol ul{margin-top:0;margin-bottom:0}html body li{margin-bottom:0}html body li.task-list-item{list-style:none}html body li>p{margin-top:0;margin-bottom:0}html body .task-list-item-checkbox{margin:0 .2em .25em -1.8em;vertical-align:middle}html body .task-list-item-checkbox:hover{cursor:pointer}html body blockquote{margin:16px 0;font-size:inherit;padding:0 15px;color:#5c5c5c;border-left:4px solid #d6d6d6}html body blockquote>:first-child{margin-top:0}html body blockquote>:last-child{margin-bottom:0}html body hr{height:4px;margin:32px 0;background-color:#d6d6d6;border:0 none}html body table{margin:10px 0 15px 0;border-collapse:collapse;border-spacing:0;display:block;width:100%;overflow:auto;word-break:normal;word-break:keep-all}html body table th{font-weight:bold;color:#000}html body table td,html body table th{border:1px solid #d6d6d6;padding:6px 13px}html body dl{padding:0}html body dl dt{padding:0;margin-top:16px;font-size:1em;font-style:italic;font-weight:bold}html body dl dd{padding:0 16px;margin-bottom:16px}html body code{font-family:Menlo,Monaco,Consolas,'Courier New',monospace;font-size:.85em !important;color:#000;background-color:#f0f0f0;border-radius:3px;padding:.2em 0}html body code::before,html body code::after{letter-spacing:-0.2em;content:\"\\00a0\"}html body pre>code{padding:0;margin:0;font-size:.85em !important;word-break:normal;white-space:pre;background:transparent;border:0}html body .highlight{margin-bottom:16px}html body .highlight pre,html body 
pre{padding:1em;overflow:auto;font-size:.85em !important;line-height:1.45;border:#d6d6d6;border-radius:3px}html body .highlight pre{margin-bottom:0;word-break:normal}html body pre code,html body pre tt{display:inline;max-width:initial;padding:0;margin:0;overflow:initial;line-height:inherit;word-wrap:normal;background-color:transparent;border:0}html body pre code:before,html body pre tt:before,html body pre code:after,html body pre tt:after{content:normal}html body p,html body blockquote,html body ul,html body ol,html body dl,html body pre{margin-top:0;margin-bottom:16px}html body kbd{color:#000;border:1px solid #d6d6d6;border-bottom:2px solid #c7c7c7;padding:2px 4px;background-color:#f0f0f0;border-radius:3px}@media print{html body{background-color:#fff}html body h1,html body h2,html body h3,html body h4,html body h5,html body h6{color:#000;page-break-after:avoid}html body blockquote{color:#5c5c5c}html body pre{page-break-inside:avoid}html body table{display:table}html body img{display:block;max-width:100%;max-height:100%}html body pre,html body code{word-wrap:break-word;white-space:pre}}.markdown-preview{width:100%;height:100%;box-sizing:border-box}.markdown-preview .pagebreak,.markdown-preview .newpage{page-break-before:always}.markdown-preview pre.line-numbers{position:relative;padding-left:3.8em;counter-reset:linenumber}.markdown-preview pre.line-numbers>code{position:relative}.markdown-preview pre.line-numbers .line-numbers-rows{position:absolute;pointer-events:none;top:1em;font-size:100%;left:0;width:3em;letter-spacing:-1px;border-right:1px solid #999;-webkit-user-select:none;-moz-user-select:none;-ms-user-select:none;user-select:none}.markdown-preview pre.line-numbers .line-numbers-rows>span{pointer-events:none;display:block;counter-increment:linenumber}.markdown-preview pre.line-numbers .line-numbers-rows>span:before{content:counter(linenumber);color:#999;display:block;padding-right:.8em;text-align:right}.markdown-preview .mathjax-exps 
.MathJax_Display{text-align:center !important}.markdown-preview:not([for=\"preview\"]) .code-chunk .btn-group{display:none}.markdown-preview:not([for=\"preview\"]) .code-chunk .status{display:none}.markdown-preview:not([for=\"preview\"]) .code-chunk .output-div{margin-bottom:16px}.scrollbar-style::-webkit-scrollbar{width:8px}.scrollbar-style::-webkit-scrollbar-track{border-radius:10px;background-color:transparent}.scrollbar-style::-webkit-scrollbar-thumb{border-radius:5px;background-color:rgba(150,150,150,0.66);border:4px solid rgba(150,150,150,0.66);background-clip:content-box}html body[for=\"html-export\"]:not([data-presentation-mode]){position:relative;width:100%;height:100%;top:0;left:0;margin:0;padding:0;overflow:auto}html body[for=\"html-export\"]:not([data-presentation-mode]) .markdown-preview{position:relative;top:0}@media screen and (min-width:914px){html body[for=\"html-export\"]:not([data-presentation-mode]) .markdown-preview{padding:2em calc(50% - 457px + 2em)}}@media screen and (max-width:914px){html body[for=\"html-export\"]:not([data-presentation-mode]) .markdown-preview{padding:2em}}@media screen and (max-width:450px){html body[for=\"html-export\"]:not([data-presentation-mode]) .markdown-preview{font-size:14px !important;padding:1em}}@media print{html body[for=\"html-export\"]:not([data-presentation-mode]) #sidebar-toc-btn{display:none}}html body[for=\"html-export\"]:not([data-presentation-mode]) #sidebar-toc-btn{position:fixed;bottom:8px;left:8px;font-size:28px;cursor:pointer;color:inherit;z-index:99;width:32px;text-align:center;opacity:.4}html body[for=\"html-export\"]:not([data-presentation-mode])[html-show-sidebar-toc] #sidebar-toc-btn{opacity:1}html body[for=\"html-export\"]:not([data-presentation-mode])[html-show-sidebar-toc] .md-sidebar-toc{position:fixed;top:0;left:0;width:300px;height:100%;padding:32px 0 48px 0;font-size:14px;box-shadow:0 0 4px rgba(150,150,150,0.33);box-sizing:border-box;overflow:auto;background-color:inherit}html 
body[for=\"html-export\"]:not([data-presentation-mode])[html-show-sidebar-toc] .md-sidebar-toc::-webkit-scrollbar{width:8px}html body[for=\"html-export\"]:not([data-presentation-mode])[html-show-sidebar-toc] .md-sidebar-toc::-webkit-scrollbar-track{border-radius:10px;background-color:transparent}html body[for=\"html-export\"]:not([data-presentation-mode])[html-show-sidebar-toc] .md-sidebar-toc::-webkit-scrollbar-thumb{border-radius:5px;background-color:rgba(150,150,150,0.66);border:4px solid rgba(150,150,150,0.66);background-clip:content-box}html body[for=\"html-export\"]:not([data-presentation-mode])[html-show-sidebar-toc] .md-sidebar-toc a{text-decoration:none}html body[for=\"html-export\"]:not([data-presentation-mode])[html-show-sidebar-toc] .md-sidebar-toc ul{padding:0 1.6em;margin-top:.8em}html body[for=\"html-export\"]:not([data-presentation-mode])[html-show-sidebar-toc] .md-sidebar-toc li{margin-bottom:.8em}html body[for=\"html-export\"]:not([data-presentation-mode])[html-show-sidebar-toc] .md-sidebar-toc ul{list-style-type:none}html body[for=\"html-export\"]:not([data-presentation-mode])[html-show-sidebar-toc] .markdown-preview{left:300px;width:calc(100% -  300px);padding:2em calc(50% - 457px -  150px);margin:0;box-sizing:border-box}@media screen and (max-width:1274px){html body[for=\"html-export\"]:not([data-presentation-mode])[html-show-sidebar-toc] .markdown-preview{padding:2em}}@media screen and (max-width:450px){html body[for=\"html-export\"]:not([data-presentation-mode])[html-show-sidebar-toc] .markdown-preview{width:100%}}html body[for=\"html-export\"]:not([data-presentation-mode]):not([html-show-sidebar-toc]) .markdown-preview{left:50%;transform:translateX(-50%)}html body[for=\"html-export\"]:not([data-presentation-mode]):not([html-show-sidebar-toc]) .md-sidebar-toc{display:none}\n/* Please visit the URL below for more information: */\n/*   https://shd101wyy.github.io/markdown-preview-enhanced/#/customize-css */\n.markdown-preview.markdown-preview 
h1,\n.markdown-preview.markdown-preview h2,\n.markdown-preview.markdown-preview h3,\n.markdown-preview.markdown-preview h4,\n.markdown-preview.markdown-preview h5,\n.markdown-preview.markdown-preview h6 {\n  font-weight: bolder;\n  text-decoration-line: underline;\n}\n\n      </style>\n    </head>\n    <body for=\"html-export\">\n      <div class=\"mume markdown-preview  \">\n      <h1 class=\"mume-header\" id=\"background\">Background</h1>\n\n<p>In Chapter 4: Digit Recognition, we&apos;ll add a few new techniques to our image processing toolset by attempting to build a digit recognition pipeline from start to finish. Throughout the exercise, we will get to practice the image preprocessing tricks we&apos;ve picked up from previous chapters:</p>\n<ul>\n<li>Image manipulations such as resizing, cropping, rotation, color conversion</li>\n<li>Blurring and sharpening operations</li>\n<li>Thresholding and Edge Detection</li>\n<li>Contour approximation</li>\n</ul>\n<p>New methods and strategies that you&apos;ll be learning include:</p>\n<ul>\n<li>Drawing operations (rectangles, text) on our image</li>\n<li>Region of interest and bounding rectangles</li>\n<li>Morphological transformations</li>\n<li>The Seven-Segment Display</li>\n</ul>\n<h2 class=\"mume-header\" id=\"what-about-deep-learning\">What about Deep Learning?</h2>\n\n<p>To be clear, the specialised deep learning libraries that have sprung up in recent years are a lot more robust in their approach. By utilizing machine learning principles (cost functions, gradient descent, etc.), these specialised libraries can handle highly complex object recognition and OCR (optical character recognition) tasks at the cost of brute computing power.</p>\n<p>The overarching motivation of this free course, however, is to make clear to beginners what constitutes artificial intelligence, and to illustrate the principal benefits of machine learning. 
I try to achieve that by demonstrating -- over multiple chapters of this course -- how computer vision tasks were traditionally, or rather &quot;classically&quot;, performed prior to the emergence of deep learning.</p>\n<p>By learning the classical approaches to computer vision, the student (you) can see first-hand the effort it takes to hand-tune parameters, which adds a new dimension of appreciation for the self-learning methods that we&apos;ll discuss in the near future.</p>\n<h2 class=\"mume-header\" id=\"region-of-interest\">Region of Interest</h2>\n\n<p>Do a quick Google search on &quot;digit recognition&quot; or &quot;digit classification&quot; and it&apos;s hard to find an introductory deep learning course that <strong>doesn&apos;t use</strong> the famous MNIST (Modified National Institute of Standards and Technology)<sup class=\"footnote-ref\"><a href=\"#fn1\" id=\"fnref1\">[1]</a></sup> database. This is a handwritten digit database that has long been the <em>de facto</em> standard in machine learning tutorials:</p>\n<p><img src=\"assets/mnist.png\" alt></p>\n<p>But I&apos;d argue that, for a budding computer vision developer, your learning objectives are better served by taking a different approach.</p>\n<p>By choosing real-life images, you are confronted with a few key challenges that are not present when using a well-curated database such as MNIST. 
These challenges present new opportunities to learn about key concepts such as <strong>region of interest</strong> and <strong>morphological operations</strong> that you will come to rely upon greatly in the future.</p>\n<p>First, take a look at 4 real-life pictures of security tokens issued by banks and institutional agencies (left-to-right: Bank Central Asia, DBS, OCBC Bank, OneKey for Singapore Government e-services):</p>\n<p><img src=\"assets/securitytokens.png\" alt></p>\n<p>Notice how noisy these images are: each image is shot against a different background under different lighting conditions, and each token differs in size, shape, and color.</p>\n<p>Your task, as a computer vision developer, is to develop a pipeline in which each phase takes you closer to the goal. Roughly speaking, given the above task, we would formulate a pipeline that looks like the following:</p>\n<ol>\n<li>Preprocessing, noise reduction</li>\n<li>Contour approximation</li>\n<li>Find the region of interest (ROI), that is, the area of the LCD display in each of these pictures</li>\n<li>Extract the ROI for further preprocessing, discarding the rest of the image</li>\n<li>Isolate each digit from the ROI</li>\n<li>Iteratively classify each digit in the image</li>\n<li>Combine the per-digit classifications into a final string (&quot;output&quot;)</li>\n</ol>\n<p>In practice, steps (1) and (2) above are the &quot;application&quot; of the methods you&apos;ve learned in previous chapters of this series. As we&apos;ll soon observe, we will use a combination of blurring operations and edge detection to draw our contours. Among the contours, one of them would be the LCD display containing the digits to be classified. 
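</p>
<p>Step (3) boils down to ranking the candidate contours by the area they enclose and keeping the largest (we&apos;ll do exactly this with <code>cv2.contourArea</code> shortly). The sketch below illustrates that ranking with the shoelace formula on hypothetical, hand-made contours; in the actual scripts, <code>cv2.findContours</code> and <code>cv2.contourArea</code> do this work:</p>

```python
def polygon_area(pts):
    # shoelace formula: area of a simple closed polygon given as (x, y) pairs
    n = len(pts)
    s = 0.0
    for i in range(n):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

# hypothetical contours as vertex lists (real ones come from cv2.findContours)
contours = {
    "lcd_display": [(0, 0), (40, 0), (40, 12), (0, 12)],  # 480 square px
    "button":      [(0, 0), (6, 0), (6, 6), (0, 6)],      # 36 square px
    "logo":        [(0, 0), (10, 0), (10, 4), (0, 4)],    # 40 square px
}

# rank candidate contours by enclosed area, largest first
ranked = sorted(contours, key=lambda k: polygon_area(contours[k]), reverse=True)
print(ranked[0])  # -> lcd_display
```

<p>On the security-token pictures, the largest closed contour found this way is the LCD display.</p>
<p>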
That is our <strong>Region of Interest</strong>.</p>\n<p><img src=\"assets/croproi.gif\" alt></p>\n<h3 class=\"mume-header\" id=\"selecting-region-of-interest\">Selecting Region of Interest</h3>\n\n<p>The GIF above demonstrates the code in <code>roi_01.py</code>, but essentially it shows the <code>selectROI</code> method in action. You&apos;ll commonly combine the <code>selectROI</code> method with either a slicing operation to crop your region of interest, or a drawing operation to call attention to the specific region of the image.</p>\n<pre data-role=\"codeBlock\" data-info=\"py\" class=\"language-python\">x, y, w, h = cv2.selectROI(&quot;Region of interest&quot;, img)\ncropped = img[y:y+h, x:x+w]\n# draw rectangle\ncv2.rectangle(img_color, (x, y), (x+w, y+h), (255, 0, 0), 2)\n</pre><p>In most cases, it simply wouldn&apos;t be realistic to render each image and manually specify our region of interest. We&apos;ll need this operation to be as close to automatic as possible. But how exactly? That depends greatly on the specific problem set.</p>\n<p>In some cases, the obvious strategy would simply be shape recognition, say by counting the number of vertices of each contour. The following code is an example implementation of that:</p>\n<pre data-role=\"codeBlock\" data-info=\"py\" class=\"language-python\"># cnt = contour\nperi = cv2.arcLength(cnt, True)\n# contour approximation\ncnt_approx = cv2.approxPolyDP(cnt, 0.03 * peri, True)\nif len(cnt_approx) == 3:\n    est_shape = &apos;triangle&apos;\n...\nelif len(cnt_approx) == 5:\n    est_shape = &apos;pentagon&apos;\n...\n</pre><p>In other cases, you may employ a strategy that tries to match contours based on Hu moments (which we&apos;ll study in detail in future chapters).</p>\n<p>Other methods may involve a saliency map, or a visual attention map, for ROI extraction. These methods create a new representation of the original image where each pixel&apos;s <strong>unique quality</strong> is amplified or emphasized. One example implementation on Wikipedia<sup class=\"footnote-ref\"><a href=\"#fn2\" id=\"fnref2\">[2]</a></sup> demonstrates how straightforward this concept really is:</p>\n<p></p><div class=\"mathjax-exps\">$$SALS(I_k) = \\sum^{N}_{i=1}|I_k-I_i|$$</div><p></p>\n<p>As you add new tools and strategies to your computer vision toolbox, you will pick up new approaches to ROI extraction. It is an interesting field of research that has been gaining popularity with the emergence of deep learning.</p>\n<p>As for the images of bank security tokens, can you think of an approach that may be a good fit? 
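</p>
<p>To make the saliency formula above concrete, here is a minimal NumPy sketch of that pixel-wise definition (the helper name <code>saliency_map</code> is ours, not part of the course scripts). A 256-bin histogram turns the naive double sum over all pixel pairs into a simple lookup table:</p>

```python
import numpy as np

def saliency_map(gray):
    # SALS(I_k) = sum_i |I_k - I_i|: each pixel's saliency is its total
    # absolute difference against every pixel in the image.
    hist = np.bincount(gray.ravel(), minlength=256)
    levels = np.arange(256)
    # lut[v] = sum over intensities u of |v - u| * count(u)
    lut = np.abs(levels[:, None] - levels[None, :]).dot(hist)
    return lut[gray]
```

<p>Rare, high-contrast pixels receive the highest values, which is exactly the &quot;unique quality&quot; being amplified.</p>
<p>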
Our region of interest is the LCD screen at the top of the button pad on each device, and they all seem to be rather consistent in shape and size. Give it some thought and read on to find out.</p>\n<h3 class=\"mume-header\" id=\"arc-length-and-area-size\">Arc Length and Area Size</h3>\n\n<p>I&apos;ve hinted at shape and size being a factor, so maybe that would be a good starting point. The good news is that OpenCV makes this incredibly easy through the <code>contourArea()</code> and <code>arcLength()</code> functions.</p>\n<p>The following snippet of code, lifted from <code>contourarea_01.py</code>, finds all contours and sorts them by area in descending order before storing the first 10 in <code>cnts</code>:</p>\n<pre data-role=\"codeBlock\" data-info=\"py\" class=\"language-python\">cnts, _ = cv2.findContours(edged, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)\n# sort contours by contourArea, and get the first 10\ncnts = sorted(cnts, key=cv2.contourArea, reverse=True)[:10]\n</pre><p>We can also obtain the contour area and 
perimeter iteratively in a for-loop, like the following:</p>\n<pre data-role=\"codeBlock\" data-info=\"py\" class=\"language-python\">cnts, _ = cv2.findContours(edged, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)\nfor i in range(len(cnts)):\n    area = cv2.contourArea(cnts[i])\n    peri = cv2.arcLength(cnts[i], closed=True)\n    print(f&apos;Area: {area}, Perimeter: {peri}&apos;)\n</pre><p>In effect, we&apos;re looping through each contour that the <code>findContours()</code> operation found, and computing two values each time: <code>area</code> and <code>peri</code>.</p>\n<p>Note that the contour perimeter is also known as the arc length. The second argument, <code>closed</code>, specifies whether the shape is a closed contour (<code>True</code>) or just a curve (<code>closed=False</code>).</p>\n<p>Execute <code>contourarea_01.py</code> and observe how each contour is displayed, from the one with the largest area to the one with the least, for a total of 10 contours. As you run the script on different pictures of bank security tokens, you&apos;ll see that it does a reliable job at finding the contours, sorting them, and returning our LCD display screen as the first in the list. This makes sense, because visually it is apparent that the LCD display occupies the largest area among the closed shapes in our picture.</p>\n<h4 class=\"mume-header\" id=\"dive-deeper-roi\">Dive Deeper: ROI</h4>\n\n<ol>\n<li>\n<p>Use <code>assets/dbs.jpg</code> instead of <code>assets/ocbc.jpg</code> in <code>contourarea_01.py</code>. Were you able to extract the region of interest (LCD Display) successfully without any changes to the script?</p>\n</li>\n<li>\n<p>Could we have successfully extracted our region of interest had we used <code>arcLength</code> in our strategy?</p>\n</li>\n<li>\n<p>Suppose we only wanted to extract the region of interest and not the rest; which line of code would you change? 
Reflect the change in the code and execute it to confirm that you have performed this exercise correctly.</p>\n</li>\n<li>\n<p>Suppose we wanted the contours sorted by area from the smallest to the largest; which line of code would you change? Reflect the change in the code and execute it to confirm that you have performed this exercise correctly.</p>\n</li>\n</ol>\n<p>While working through the exercises above, you may find it helpful to also draw text describing the area and perimeter next to each contour. I&apos;ve shown you how this can be done in <code>contourarea_02.py</code>, but the essential addition we make to the earlier code is the two calls to <code>putText()</code>:</p>\n<pre data-role=\"codeBlock\" data-info=\"py\" class=\"language-python\">PURPLE = (75, 0, 130)\nTHICKNESS = 1\nFONT = cv2.FONT_HERSHEY_SIMPLEX\ncv2.putText(img_color, &quot;Area:&quot; + str(area), (x, y - 15), FONT, 0.4, PURPLE, THICKNESS)\ncv2.putText(img_color, &quot;Perimeter:&quot; + str(peri), (x, y - 5), FONT, 0.4, PURPLE, THICKNESS)\n</pre><p><img src=\"assets/textcontour.png\" alt></p>\n<h3 class=\"mume-header\" id=\"roi-extraction\">ROI extraction</h3>\n\n<p>With these foundations, we are now ready to write a simple utility script that will:</p>\n<ol>\n<li>Find our region of interest</li>\n<li>Crop the ROI into a new image</li>\n<li>Save it into a folder named <code>inter</code> (intermediary) for the actual digit recognition later</li>\n</ol>\n<p>Much of what you need to do has already been presented so far, but the core pieces, lifted from <code>roi_02.py</code>, are the following few lines of code:</p>\n<pre data-role=\"codeBlock\" data-info=\"py\" class=\"language-python\">img = cv2.imread(...)\nblurred = cv2.GaussianBlur(img, (7, 7), 0)\nedged = cv2.Canny(blurred, 130, 150)\ncnts, _ = cv2.findContours(edged, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)\ncnts = sorted(cnts, key=cv2.contourArea, reverse=True)[:1]\n\nx, y, w, h = cv2.boundingRect(cnts[0])\nroi = img[y : y + h, x : x + w]\ncv2.imwrite(&quot;roi.png&quot;, roi)\n</pre><p>The <code>roi_02.py</code> utility script uses the <code>argparse</code> library so the user can specify a file path with the <code>-p</code> (or <code>--path</code>) flag, like so:</p>\n<pre data-role=\"codeBlock\" data-info=\"bash\" class=\"language-bash\">python roi_02.py -p assets/ocbc.jpg\n# equivalent:\npython roi_02.py --path assets/ocbc.jpg\n</pre><p>If the user does not specify a file path using the <code>-p</code> flag, the default value would be <code>assets/ocbc.jpg</code>. 
If you wish to change this, edit <code>roi_02.py</code> and specify a different value for the <code>default</code> parameter.</p>\n<pre data-role=\"codeBlock\" data-info=\"py\" class=\"language-python\">parser = argparse.ArgumentParser()\nparser.add_argument(&quot;-p&quot;, &quot;--path&quot;, default=&quot;assets/ocbc.jpg&quot;)\n</pre><p>You should run this exercise using <code>dbs.jpg</code>, <code>ocbc2.jpg</code>, or <code>onekey.jpg</code> at least once. Execute the script and check the <code>inter</code> folder to confirm that the ROI has been saved. When you&apos;re done, you are ready to move on to the next phase of the digit recognition pipeline.</p>\n<h2 class=\"mume-header\" id=\"morphological-transformations\">Morphological Transformations</h2>\n\n<p>Once the region of interest is obtained, we have an image that may still contain noise. This is especially the case when our ROI is obtained by means of thresholding methods, since you can expect some &quot;non-features&quot; (noise) to also be included in the resulting image.</p>\n<p>To account for these imperfections, we will now perform a series of operations on our image. We&apos;ll learn what they are formally, but let&apos;s begin by seeing what it is that they <em>offer</em> to our image processing pipeline. 
I&apos;ve included a picture with some random noise, as follows:</p>\n<p><img src=\"assets/0417s.png\" alt></p>\n<p>The digits &quot;0417&quot; are clearly discernible to the human eye despite the presence of noise. However, consider the perspective of a global thresholding operation: these pixel values are &quot;noise&quot; to us, but a computer has no such notion of which pixel values are meaningful and which are not. A threshold value such as the global mean will take all values into account indiscriminately. A contour finding operation will return thousands of tiny round segments instead of 4 (they may be tiny, but they are completely valid contours).</p>\n<p>An image processing pipeline that fails to account for these may result in sub-optimal performance or, very often, completely undesired results.</p>\n<p>Enter two of the most fundamental morphological transformations: <strong>erosion</strong> and <strong>dilation</strong>.</p>\n<h3 class=\"mume-header\" id=\"erosion\">Erosion</h3>\n\n<p>Erosion &quot;erodes away the boundaries of foreground object&quot;<sup class=\"footnote-ref\"><a href=\"#fn3\" id=\"fnref3\">[3]</a></sup> by sliding a kernel through the image and setting a pixel to 1 <strong>only if all the pixels under the kernel are 1</strong>.</p>\n<p>This in effect discards pixels near the boundary and any floating pixels that are not part of a larger blob (which is what the human eye is interested in). Because pixels are eroded, your foreground object will shrink in size.</p>\n<h3 class=\"mume-header\" id=\"dilation\">Dilation</h3>\n\n<p>The opposite of erosion, Dilation sets a pixel to 1 if <strong>at least one pixel under the kernel is 1</strong>, essentially &quot;growing&quot; the foreground object.</p>\n<p>Because of how these operations work, there are a couple of things to note:</p>\n<ol>\n<li>Morphological transformations are usually performed on binary images. 
Recall that pixel values in binary images are either a full white (i.e. 1) or black (i.e. 0).</li>\n<li>As per convention, we want to keep our foreground in white and background in black</li>\n<li>Because erosion results in a shrinking foreground and dilation results in a growing foreground, these two operations are also commonly used in combination, i.e. erosion followed by dilation, or vice versa</li>\n</ol>\n<p><img src=\"assets/morphexample.png\" alt></p>\n<p>As we read our image in grayscale mode (<code>flags=0</code>), we obtain a white background and a mostly-black foreground. This is illustrated in the subplot titled &quot;Original&quot; above. We begin our preprocessing steps by first binarizing the image (step 1), followed by inverting the colors (step 2) to get a white-on-black image.</p>\n<p>An erosion operation is then performed (step 3). This works by creating our kernel (either through <code>numpy</code> or through <code>opencv</code>&apos;s structuring element) and sliding that kernel across the image to remove white noise.</p>\n<p>The side-effect is that our foreground object has now shrunk in size as its boundaries are eroded away. 
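</p>
<p>Under the hood, binary erosion and dilation are nothing more than minimum and maximum filters. The NumPy-only sketch below (hypothetical helpers; in practice you would use <code>cv2.erode</code> and <code>cv2.dilate</code>) makes the &quot;all pixels&quot; and &quot;at least one pixel&quot; rules explicit:</p>

```python
import numpy as np

def erode(binary, k=3):
    # a pixel stays 1 only if every pixel under the k x k window is 1
    pad = k // 2
    padded = np.pad(binary, pad, mode="constant", constant_values=0)
    windows = np.lib.stride_tricks.sliding_window_view(padded, (k, k))
    return windows.min(axis=(2, 3))

def dilate(binary, k=3):
    # a pixel becomes 1 if at least one pixel under the k x k window is 1
    pad = k // 2
    padded = np.pad(binary, pad, mode="constant", constant_values=0)
    windows = np.lib.stride_tricks.sliding_window_view(padded, (k, k))
    return windows.max(axis=(2, 3))
```

<p>Applying <code>dilate(erode(img))</code> to a small binary image removes isolated specks while restoring the surviving blobs to roughly their original size.</p>
<p>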
We grow it back by applying a dilation (step 4) and finally show the output as illustrated in the bottom-right pane of the image above.</p>\n<pre data-role=\"codeBlock\" data-info=\"py\" class=\"language-python\"># read as grayscale\nroi = cv2.imread(&quot;assets/0417s.png&quot;, flags=0)\n# step 1: binarize\n_, thresh = cv2.threshold(roi, 170, 255, cv2.THRESH_BINARY)\n# step 2: invert\ninv = cv2.bitwise_not(thresh)\n# step 3 (option 1): kernel via numpy\nkernel = np.ones((5, 5), np.uint8)\n# step 3 (option 2): kernel via a structuring element\nkernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))\neroded = cv2.erode(inv, kernel, iterations=1)\n# step 4: dilate\ndilated = cv2.dilate(eroded, kernel, iterations=1)\ncv2.imshow(&quot;Transformed&quot;, dilated)\ncv2.waitKey(0)\n</pre><p>OpenCV provides three shapes for our kernel:</p>\n<ul>\n<li>Rectangular box: <code>MORPH_RECT</code></li>\n<li>Cross: <code>MORPH_CROSS</code></li>\n<li>Ellipse: <code>MORPH_ELLIPSE</code></li>\n</ul>\n<p>They are fed as the first argument into 
<code>cv2.getStructuringElement()</code>, with the second being the kernel size (<code>ksize</code>) itself. The third argument is the <em>anchor point</em>, which defaults to the center.</p>\n<h3 class=\"mume-header\" id=\"opening-and-closing\">Opening and Closing</h3>\n\n<p>Another name for <strong>Erosion followed by Dilation</strong> is Opening. It is useful for removing noise in our image. The reverse of Opening is Closing, where we <strong>perform Dilation followed by Erosion</strong>; it is particularly suited for closing small holes inside foreground objects.</p>\n<p>OpenCV includes the more generic <code>morphologyEx</code> method for all other morphological operations beyond Erosion and Dilation. The function takes an image as the first argument, the operation as the second argument, and finally the kernel. Compare how your code will differ between <code>cv2.erode</code> and <code>cv2.dilate</code>, and their respective equivalents in <code>cv2.morphologyEx()</code>:</p>\n<pre data-role=\"codeBlock\" data-info=\"py\" class=\"language-python\"><span class=\"token keyword\">import</span> cv2\n<span class=\"token keyword\">import</span> numpy <span class=\"token keyword\">as</span> np\n\nimg <span class=\"token operator\">=</span> cv2<span class=\"token punctuation\">.</span>imread<span class=\"token punctuation\">(</span><span class=\"token string\">&apos;image.png&apos;</span><span class=\"token punctuation\">,</span><span class=\"token number\">0</span><span class=\"token punctuation\">)</span>\nkernel <span class=\"token operator\">=</span> np<span class=\"token punctuation\">.</span>ones<span class=\"token punctuation\">(</span><span class=\"token punctuation\">(</span><span class=\"token number\">5</span><span class=\"token punctuation\">,</span><span class=\"token number\">5</span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">,</span>np<span class=\"token punctuation\">.</span>uint8<span class=\"token 
punctuation\">)</span>\nerosion <span class=\"token operator\">=</span> cv2<span class=\"token punctuation\">.</span>erode<span class=\"token punctuation\">(</span>img<span class=\"token punctuation\">,</span>kernel<span class=\"token punctuation\">,</span>iterations <span class=\"token operator\">=</span> <span class=\"token number\">1</span><span class=\"token punctuation\">)</span>\n<span class=\"token comment\"># Equivalent:</span>\n<span class=\"token comment\"># cv2.morphologyEx(img, cv2.MORPH_ERODE, kernel,iterations=1)</span>\ndilation <span class=\"token operator\">=</span> cv2<span class=\"token punctuation\">.</span>dilate<span class=\"token punctuation\">(</span>img<span class=\"token punctuation\">,</span>kernel<span class=\"token punctuation\">,</span>iterations <span class=\"token operator\">=</span> <span class=\"token number\">1</span><span class=\"token punctuation\">)</span>\n<span class=\"token comment\"># Equivalent:</span>\n<span class=\"token comment\"># cv2.morphologyEx(img, cv2.MORPH_DILATE, kernel,iterations=1)</span>\nopening <span class=\"token operator\">=</span> cv2<span class=\"token punctuation\">.</span>morphologyEx<span class=\"token punctuation\">(</span>img<span class=\"token punctuation\">,</span> cv2<span class=\"token punctuation\">.</span>MORPH_OPEN<span class=\"token punctuation\">,</span> kernel<span class=\"token punctuation\">)</span>\nclosing <span class=\"token operator\">=</span> cv2<span class=\"token punctuation\">.</span>morphologyEx<span class=\"token punctuation\">(</span>img<span class=\"token punctuation\">,</span> cv2<span class=\"token punctuation\">.</span>MORPH_CLOSE<span class=\"token punctuation\">,</span> kernel<span class=\"token punctuation\">)</span>\n</pre><h3 class=\"mume-header\" id=\"learn-by-building-morphological-transformation\">Learn-by-building: Morphological Transformation</h3>\n\n<p>In the <code>homework</code> directory, you&apos;ll find <code>0417h.png</code>. 
Your job is to apply what you&apos;ve learned in this lesson to clean up the image. Your output should have these qualities:</p>\n<ol>\n<li>As free of noise as possible (remove the lines, and the red splatted dots across the image)</li>\n<li>If you run <code>findContours()</code> on the output, you should have exactly 4 contours</li>\n<li>Foreground object in white, background in black</li>\n</ol>\n<p><img src=\"homework/0417h.png\" alt></p>\n<p>You are free to pick your strategy, but a reference solution would look like the following:</p>\n<p><img src=\"assets/0417reference.png\" alt></p>\n<h2 class=\"mume-header\" id=\"seven-segment-display\">Seven-segment display</h2>\n\n<p>The seven-segment display (known also as &quot;seven-segment indicator&quot;) is a form of electronic display device for displaying decimal numerals<sup class=\"footnote-ref\"><a href=\"#fn4\" id=\"fnref4\">[4]</a></sup> widely used in digital clocks, electronic meters, calculators and banking security tokens.</p>\n<p><img src=\"assets/sevenseg.png\" alt></p>\n<p>This is relevant because it is the character representation of our digits in each of these security tokens. If we can isolate each digit from each other, we can iteratively predict the &quot;class&quot; of each digit (0 to 9). 
Specifically, we are going to perform a classification task based on the state of each segment.</p>\n<p>To ease our understanding, let&apos;s refer to each segment using the letters A to G:</p>\n<p><img src=\"assets/sevenseg1.png\" alt></p>\n<p>We can then create a lookup table that matches the collective states to the corresponding class:</p>\n<table>\n<thead>\n<tr>\n<th>Class</th>\n<th>a</th>\n<th>b</th>\n<th>c</th>\n<th>d</th>\n<th>e</th>\n<th>f</th>\n<th>g</th>\n</tr>\n</thead>\n<tbody>\n<tr>\n<td>0</td>\n<td>1</td>\n<td>1</td>\n<td>1</td>\n<td>1</td>\n<td>1</td>\n<td>1</td>\n<td>0</td>\n</tr>\n<tr>\n<td>1</td>\n<td>0</td>\n<td>1</td>\n<td>1</td>\n<td>0</td>\n<td>0</td>\n<td>0</td>\n<td>0</td>\n</tr>\n<tr>\n<td>2</td>\n<td>1</td>\n<td>1</td>\n<td>0</td>\n<td>1</td>\n<td>1</td>\n<td>0</td>\n<td>1</td>\n</tr>\n<tr>\n<td>3</td>\n<td>1</td>\n<td>1</td>\n<td>1</td>\n<td>1</td>\n<td>0</td>\n<td>0</td>\n<td>1</td>\n</tr>\n<tr>\n<td>4</td>\n<td>0</td>\n<td>1</td>\n<td>1</td>\n<td>0</td>\n<td>0</td>\n<td>1</td>\n<td>1</td>\n</tr>\n<tr>\n<td>5</td>\n<td>1</td>\n<td>0</td>\n<td>1</td>\n<td>1</td>\n<td>0</td>\n<td>1</td>\n<td>1</td>\n</tr>\n<tr>\n<td>6</td>\n<td>1</td>\n<td>0</td>\n<td>1</td>\n<td>1</td>\n<td>1</td>\n<td>1</td>\n<td>1</td>\n</tr>\n<tr>\n<td>7</td>\n<td>1</td>\n<td>1</td>\n<td>1</td>\n<td>0</td>\n<td>0</td>\n<td>1</td>\n<td>0</td>\n</tr>\n<tr>\n<td>8</td>\n<td>1</td>\n<td>1</td>\n<td>1</td>\n<td>1</td>\n<td>1</td>\n<td>1</td>\n<td>1</td>\n</tr>\n<tr>\n<td>9</td>\n<td>1</td>\n<td>1</td>\n<td>1</td>\n<td>1</td>\n<td>0</td>\n<td>1</td>\n<td>1</td>\n</tr>\n</tbody>\n</table>\n<p>How would we represent such a lookup table in our Python code and how would we use it? The obvious answer to the first question is a dictionary. Notice that <code>DIGITSDICT</code> is just a representation of the &quot;binary state&quot; of each segment. 
The digit &quot;8&quot; for example correspond to all seven segments being activated, or &quot;on&quot; (state of <code>1</code>).</p>\n<pre data-role=\"codeBlock\" data-info=\"py\" class=\"language-python\">DIGITSDICT <span class=\"token operator\">=</span> <span class=\"token punctuation\">{</span>\n    <span class=\"token punctuation\">(</span><span class=\"token number\">1</span><span class=\"token punctuation\">,</span><span class=\"token number\">1</span><span class=\"token punctuation\">,</span><span class=\"token number\">1</span><span class=\"token punctuation\">,</span><span class=\"token number\">1</span><span class=\"token punctuation\">,</span><span class=\"token number\">1</span><span class=\"token punctuation\">,</span><span class=\"token number\">1</span><span class=\"token punctuation\">,</span><span class=\"token number\">0</span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">:</span><span class=\"token number\">0</span><span class=\"token punctuation\">,</span>\n    <span class=\"token punctuation\">(</span><span class=\"token number\">0</span><span class=\"token punctuation\">,</span><span class=\"token number\">1</span><span class=\"token punctuation\">,</span><span class=\"token number\">1</span><span class=\"token punctuation\">,</span><span class=\"token number\">0</span><span class=\"token punctuation\">,</span><span class=\"token number\">0</span><span class=\"token punctuation\">,</span><span class=\"token number\">0</span><span class=\"token punctuation\">,</span><span class=\"token number\">0</span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">:</span><span class=\"token number\">1</span><span class=\"token punctuation\">,</span>\n    <span class=\"token punctuation\">(</span><span class=\"token number\">1</span><span class=\"token punctuation\">,</span><span class=\"token number\">1</span><span class=\"token punctuation\">,</span><span class=\"token number\">0</span><span 
class=\"token punctuation\">,</span><span class=\"token number\">1</span><span class=\"token punctuation\">,</span><span class=\"token number\">1</span><span class=\"token punctuation\">,</span><span class=\"token number\">0</span><span class=\"token punctuation\">,</span><span class=\"token number\">1</span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">:</span><span class=\"token number\">2</span><span class=\"token punctuation\">,</span>\n    <span class=\"token punctuation\">(</span><span class=\"token number\">1</span><span class=\"token punctuation\">,</span><span class=\"token number\">1</span><span class=\"token punctuation\">,</span><span class=\"token number\">1</span><span class=\"token punctuation\">,</span><span class=\"token number\">1</span><span class=\"token punctuation\">,</span><span class=\"token number\">0</span><span class=\"token punctuation\">,</span><span class=\"token number\">0</span><span class=\"token punctuation\">,</span><span class=\"token number\">1</span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">:</span><span class=\"token number\">3</span><span class=\"token punctuation\">,</span>\n    <span class=\"token punctuation\">(</span><span class=\"token number\">0</span><span class=\"token punctuation\">,</span><span class=\"token number\">1</span><span class=\"token punctuation\">,</span><span class=\"token number\">1</span><span class=\"token punctuation\">,</span><span class=\"token number\">0</span><span class=\"token punctuation\">,</span><span class=\"token number\">0</span><span class=\"token punctuation\">,</span><span class=\"token number\">1</span><span class=\"token punctuation\">,</span><span class=\"token number\">1</span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">:</span><span class=\"token number\">4</span><span class=\"token punctuation\">,</span>\n    <span class=\"token punctuation\">(</span><span class=\"token 
number\">1</span><span class=\"token punctuation\">,</span><span class=\"token number\">0</span><span class=\"token punctuation\">,</span><span class=\"token number\">1</span><span class=\"token punctuation\">,</span><span class=\"token number\">1</span><span class=\"token punctuation\">,</span><span class=\"token number\">0</span><span class=\"token punctuation\">,</span><span class=\"token number\">1</span><span class=\"token punctuation\">,</span><span class=\"token number\">1</span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">:</span><span class=\"token number\">5</span><span class=\"token punctuation\">,</span>\n    <span class=\"token punctuation\">(</span><span class=\"token number\">1</span><span class=\"token punctuation\">,</span><span class=\"token number\">0</span><span class=\"token punctuation\">,</span><span class=\"token number\">1</span><span class=\"token punctuation\">,</span><span class=\"token number\">1</span><span class=\"token punctuation\">,</span><span class=\"token number\">1</span><span class=\"token punctuation\">,</span><span class=\"token number\">1</span><span class=\"token punctuation\">,</span><span class=\"token number\">1</span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">:</span><span class=\"token number\">6</span><span class=\"token punctuation\">,</span>\n    <span class=\"token punctuation\">(</span><span class=\"token number\">1</span><span class=\"token punctuation\">,</span><span class=\"token number\">1</span><span class=\"token punctuation\">,</span><span class=\"token number\">1</span><span class=\"token punctuation\">,</span><span class=\"token number\">0</span><span class=\"token punctuation\">,</span><span class=\"token number\">0</span><span class=\"token punctuation\">,</span><span class=\"token number\">1</span><span class=\"token punctuation\">,</span><span class=\"token number\">0</span><span class=\"token punctuation\">)</span><span class=\"token 
punctuation\">:</span><span class=\"token number\">7</span><span class=\"token punctuation\">,</span>\n    <span class=\"token punctuation\">(</span><span class=\"token number\">1</span><span class=\"token punctuation\">,</span><span class=\"token number\">1</span><span class=\"token punctuation\">,</span><span class=\"token number\">1</span><span class=\"token punctuation\">,</span><span class=\"token number\">1</span><span class=\"token punctuation\">,</span><span class=\"token number\">1</span><span class=\"token punctuation\">,</span><span class=\"token number\">1</span><span class=\"token punctuation\">,</span><span class=\"token number\">1</span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">:</span><span class=\"token number\">8</span><span class=\"token punctuation\">,</span>\n    <span class=\"token punctuation\">(</span><span class=\"token number\">1</span><span class=\"token punctuation\">,</span><span class=\"token number\">1</span><span class=\"token punctuation\">,</span><span class=\"token number\">1</span><span class=\"token punctuation\">,</span><span class=\"token number\">1</span><span class=\"token punctuation\">,</span><span class=\"token number\">0</span><span class=\"token punctuation\">,</span><span class=\"token number\">1</span><span class=\"token punctuation\">,</span><span class=\"token number\">1</span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">:</span><span class=\"token number\">9</span>\n<span class=\"token punctuation\">}</span>\n</pre><p>Then, for each digit, we would look at the pixel values in each of the seven segments, and if the majority of pixels are white, we would classify that segment as being in an activated state (<code>1</code>), otherwise in a state of <code>0</code>. 
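:</span><span class=\"token number\">7">
The majority rule for a single segment can be sketched on its own; the 4x4 region below is invented for illustration (the real code slices each region out of the thresholded ROI):

```python
import numpy as np

# hypothetical 4x4 patch of a thresholded image (pixels are 0 or 255)
region = np.array([
    [255, 255,   0, 255],
    [255, 255, 255,   0],
    [  0, 255, 255, 255],
    [255,   0, 255, 255],
], dtype=np.uint8)

# a segment is "on" (1) when more than half of its pixels are white
state = 1 if np.sum(region == 255) > region.size * 0.5 else 0
print(state)  # 12 of the 16 pixels are white, so this prints 1
```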
As we iterate over the 7 segments, we now have an array of length 7, each element a binary value(<code>0</code> or <code>1</code>).</p>\n<p>We would then find the corresponding value in our dictionary using that array. Your code would resemble the following:</p>\n<pre data-role=\"codeBlock\" data-info=\"py\" class=\"language-python\"><span class=\"token comment\"># define the rectangle areas corresponding each segment</span>\nsevensegs <span class=\"token operator\">=</span> <span class=\"token punctuation\">[</span>\n    <span class=\"token punctuation\">(</span><span class=\"token punctuation\">(</span>x0<span class=\"token punctuation\">,</span> y0<span class=\"token punctuation\">)</span><span class=\"token punctuation\">,</span> <span class=\"token punctuation\">(</span>x1<span class=\"token punctuation\">,</span> y1<span class=\"token punctuation\">)</span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">,</span>\n    <span class=\"token punctuation\">(</span><span class=\"token punctuation\">(</span>x2<span class=\"token punctuation\">,</span> y2<span class=\"token punctuation\">)</span><span class=\"token punctuation\">,</span> <span class=\"token punctuation\">(</span>x3<span class=\"token punctuation\">,</span> y3<span class=\"token punctuation\">)</span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">,</span>\n    <span class=\"token punctuation\">.</span><span class=\"token punctuation\">.</span><span class=\"token punctuation\">.</span> <span class=\"token comment\"># 7 of them</span>\n<span class=\"token punctuation\">]</span>\n\n<span class=\"token comment\"># initialize the state to OFF</span>\non <span class=\"token operator\">=</span> <span class=\"token punctuation\">[</span><span class=\"token number\">0</span><span class=\"token punctuation\">]</span> <span class=\"token operator\">*</span> <span class=\"token number\">7</span> \n\n<span class=\"token comment\"># set each segment to ON / 
OFF based on majority</span>\n<span class=\"token keyword\">for</span> <span class=\"token punctuation\">(</span>i<span class=\"token punctuation\">,</span> <span class=\"token punctuation\">(</span><span class=\"token punctuation\">(</span>p1x<span class=\"token punctuation\">,</span> p1y<span class=\"token punctuation\">)</span><span class=\"token punctuation\">,</span> <span class=\"token punctuation\">(</span>p2x<span class=\"token punctuation\">,</span> p2y<span class=\"token punctuation\">)</span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">)</span> <span class=\"token keyword\">in</span> <span class=\"token builtin\">enumerate</span><span class=\"token punctuation\">(</span>sevensegs<span class=\"token punctuation\">)</span><span class=\"token punctuation\">:</span>\n    <span class=\"token comment\"># numpy slicing to extract only one region</span>\n    region <span class=\"token operator\">=</span> roi<span class=\"token punctuation\">[</span>p1y<span class=\"token punctuation\">:</span>p2y<span class=\"token punctuation\">,</span> p1x<span class=\"token punctuation\">:</span>p2x<span class=\"token punctuation\">]</span>\n    <span class=\"token comment\"># if majority pixels are white, set state to ON</span>\n    <span class=\"token keyword\">if</span> np<span class=\"token punctuation\">.</span><span class=\"token builtin\">sum</span><span class=\"token punctuation\">(</span>region <span class=\"token operator\">==</span> <span class=\"token number\">255</span><span class=\"token punctuation\">)</span> <span class=\"token operator\">&gt;</span> region<span class=\"token punctuation\">.</span>size <span class=\"token operator\">*</span><span class=\"token number\">0.5</span><span class=\"token punctuation\">:</span>\n        on<span class=\"token punctuation\">[</span>i<span class=\"token punctuation\">]</span> <span class=\"token operator\">=</span> <span class=\"token number\">1</span>\n\n<span class=\"token comment\"># 
lookup on dictionary</span>\ndigit <span class=\"token operator\">=</span> DIGITSDICT<span class=\"token punctuation\">[</span><span class=\"token builtin\">tuple</span><span class=\"token punctuation\">(</span>on<span class=\"token punctuation\">)</span><span class=\"token punctuation\">]</span> <span class=\"token comment\"># digit is one of 0-9</span>\n</pre><p>There are multiple ways to write a for-loop, but it&apos;s important that you are aware of the order in which your for-loop is executing. Referring to our seven-segment illustration below, the first iteration is only concerned with the state of &apos;A&apos;, while the second iteration handles the state of &apos;B&apos;, and so on.</p>\n<p><img src=\"assets/sevenseg1.png\" alt></p>\n<p>Using <code>enumerate</code>, we obtain an additional counter (<code>i</code>) to our iterable (<code>sevensegs</code>); this is convenient for the purpose of setting states. At the first iteration, the first element in our list is conditionally set to 1 if more than half of the pixels in segment &apos;A&apos; are white. A more detailed example of Python&apos;s enumeration is in <code>utils/enumerate.py</code>.</p>\n<h3 class=\"mume-header\" id=\"practical-strategies\">Practical Strategies</h3>\n\n<p>If you pay close attention to the digit &apos;0&apos; in our LCD display, you will notice that the absence of the &apos;G&apos; segment causes a visible and significant gap. When you test your digit recognition script without special consideration for this attribute, you will find it consistently failing to account for the numbers &quot;0&quot;, &quot;1&quot; and &quot;7&quot;. 
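A morphological Closing (dilation then erosion) can bridge such a gap. Here is a numpy-only sketch of the idea, with shift-based 3x3 operators standing in for cv2.dilate and cv2.erode and an invented tiny stroke as input; it is an illustration of the mechanism, not the lesson's solution code:

```python
import numpy as np

def dilate3(img):
    # 3x3 binary dilation: each pixel becomes the max of its neighborhood
    p = np.pad(img, 1)  # zero (background) border
    return np.max(np.stack([p[i:i + img.shape[0], j:j + img.shape[1]]
                            for i in range(3) for j in range(3)]), axis=0)

def erode3(img):
    # 3x3 binary erosion: each pixel becomes the min of its neighborhood
    p = np.pad(img, 1, constant_values=1)  # treat the border as foreground
    return np.min(np.stack([p[i:i + img.shape[0], j:j + img.shape[1]]
                            for i in range(3) for j in range(3)]), axis=0)

# a vertical stroke broken by a one-pixel gap, like the split digit "0"
stroke = np.array([[0, 1, 0],
                   [0, 1, 0],
                   [0, 0, 0],   # <- the gap
                   [0, 1, 0],
                   [0, 1, 0]], dtype=np.uint8)

closed = erode3(dilate3(stroke))
print(closed[2, 1])  # the one-pixel gap is now filled: prints 1
```

With OpenCV available, the same effect comes from `cv2.morphologyEx(stroke, cv2.MORPH_CLOSE, kernel)`.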
In fact, you may not even be able to isolate the aforementioned numbers at all using the <code>findContours</code> operation, because they are treated as two disjoint pieces instead of a single whole.</p>\n<p>A reasonable strategy to handle this is the Dilation or Closing (Dilation followed by Erosion) operation that you&apos;ve learned earlier.</p>\n<p>Similarly, your ROI may necessitate other pre-processing, and the specific tactical solutions vary greatly depending on the problem at hand.</p>\n<p>As I inspected the bounding boxes we retrieved around the LCD screen, the observation that their digits are often centered around the bottom half of the display led me to insert an additional step prior to the morphological transformation in the final code solution. The step uses numpy subsetting to trim away the top 20% as well as 20% on each side of the image:</p>\n<pre data-role=\"codeBlock\" data-info=\"py\" class=\"language-python\">roi <span class=\"token operator\">=</span> cv2<span class=\"token punctuation\">.</span>imread<span class=\"token punctuation\">(</span><span class=\"token string\">&quot;roi.png&quot;</span><span class=\"token punctuation\">,</span> flags<span class=\"token operator\">=</span><span class=\"token number\">0</span><span class=\"token punctuation\">)</span>\nRATIO <span class=\"token operator\">=</span> roi<span class=\"token punctuation\">.</span>shape<span class=\"token punctuation\">[</span><span class=\"token number\">0</span><span class=\"token punctuation\">]</span> <span class=\"token operator\">*</span> <span class=\"token number\">0.2</span>\ntrimmed <span class=\"token operator\">=</span> roi<span class=\"token punctuation\">[</span>\n    <span class=\"token builtin\">int</span><span class=\"token punctuation\">(</span>RATIO<span class=\"token punctuation\">)</span> <span class=\"token punctuation\">:</span><span class=\"token punctuation\">,</span> \n    <span class=\"token builtin\">int</span><span 
class=\"token punctuation\">(</span>RATIO<span class=\"token punctuation\">)</span> <span class=\"token punctuation\">:</span> roi<span class=\"token punctuation\">.</span>shape<span class=\"token punctuation\">[</span><span class=\"token number\">1</span><span class=\"token punctuation\">]</span> <span class=\"token operator\">-</span> <span class=\"token builtin\">int</span><span class=\"token punctuation\">(</span>RATIO<span class=\"token punctuation\">)</span><span class=\"token punctuation\">]</span>\n</pre><p>That said, whenever possible, you want to be careful not to hand-tune your solution in a way that is overly specific to the images you have at hand, lest you risk the solution <strong>only</strong> working on those specific images and not others, a phenomenon termed &quot;overfitting&quot; in the machine learning community.</p>\n<p>I re-executed the solution code against some sample image sets, once with the &quot;trimming&quot; in place and once without it, before settling on the decision. As you will see later, the trimming improves our accuracy and is a relatively safe strategy, given that every LCD screen, regardless of the issuer (bank), has the same asymmetry, with more &quot;blank space&quot; in the top half compared to the bottom half.</p>\n<h4 class=\"mume-header\" id=\"contour-properties\">Contour Properties</h4>\n\n<p>Furthermore, in many cases of digit recognition / digit classification you will want to predict the class for each digit in an ordered fashion. Suppose the LCD screen contains the digits &quot;40710382&quot;; our algorithm should correctly isolate these digits and classify them iteratively, but do so from the leftmost digit to the rightmost. Failing to account for this may result in your algorithm correctly classifying each digit, but producing an unreasonable output such as &quot;1740238&quot;.</p>\n<p>There are a few strategies you can employ here. 
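The left-to-right requirement itself can be sketched with plain tuples standing in for the values cv2.boundingRect would return; the boxes and digit labels here are invented for illustration:

```python
# hypothetical (x, y, w, h) bounding boxes paired with their predicted digit,
# in the arbitrary order findContours might return them
detections = [
    ((120, 40, 18, 30), 7),
    ((10, 42, 18, 30), 4),
    ((65, 41, 18, 30), 0),
]

# sort on x, the first element of each bounding box, to restore reading order
ordered = sorted(detections, key=lambda d: d[0][0])
print("".join(str(digit) for _, digit in ordered))  # prints "407"
```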
We&apos;ve seen in  <code>contourarea_01.py</code> and <code>contourarea_02.py</code> how contour has attributes that can be retrieved using the <code>contourArea()</code> and <code>arcLength()</code> functions. Inspect the following snippet and it should help jog your memory:</p>\n<pre data-role=\"codeBlock\" data-info=\"py\" class=\"language-python\">cnts <span class=\"token operator\">=</span> <span class=\"token builtin\">sorted</span><span class=\"token punctuation\">(</span>cnts<span class=\"token punctuation\">,</span> key<span class=\"token operator\">=</span>cv2<span class=\"token punctuation\">.</span>contourArea<span class=\"token punctuation\">,</span> reverse<span class=\"token operator\">=</span><span class=\"token boolean\">True</span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">[</span><span class=\"token punctuation\">:</span><span class=\"token number\">9</span><span class=\"token punctuation\">]</span>\n\n<span class=\"token keyword\">for</span> i<span class=\"token punctuation\">,</span> cnt <span class=\"token keyword\">in</span> <span class=\"token builtin\">enumerate</span><span class=\"token punctuation\">(</span>cnts<span class=\"token punctuation\">)</span><span class=\"token punctuation\">:</span>\n    cv2<span class=\"token punctuation\">.</span>drawContours<span class=\"token punctuation\">(</span>img_color<span class=\"token punctuation\">,</span> cnts<span class=\"token punctuation\">,</span> i<span class=\"token punctuation\">,</span> BCOLOR<span class=\"token punctuation\">,</span> THICKNESS<span class=\"token punctuation\">)</span>\n    area <span class=\"token operator\">=</span> cv2<span class=\"token punctuation\">.</span>contourArea<span class=\"token punctuation\">(</span>cnt<span class=\"token punctuation\">)</span>\n    peri <span class=\"token operator\">=</span> cv2<span class=\"token punctuation\">.</span>arcLength<span class=\"token punctuation\">(</span>cnt<span class=\"token 
punctuation\">,</span> closed<span class=\"token operator\">=</span><span class=\"token boolean\">True</span><span class=\"token punctuation\">)</span>\n    <span class=\"token keyword\">print</span><span class=\"token punctuation\">(</span><span class=\"token string-interpolation\"><span class=\"token string\">f&quot;Area:</span><span class=\"token interpolation\"><span class=\"token punctuation\">{</span>area<span class=\"token punctuation\">}</span></span><span class=\"token string\">; Perimeter: </span><span class=\"token interpolation\"><span class=\"token punctuation\">{</span>peri<span class=\"token punctuation\">}</span></span><span class=\"token string\">&quot;</span></span><span class=\"token punctuation\">)</span>\n</pre><p>Indeed, we&apos;re using contour area as a good indicator to search for our region of interest. Taking this idea a little further, we can place a constraint on our search criteria. In the following code, we draw a bounding rectangle and, as an extra layer of precaution, keep only the bounding boxes that are taller than 20 pixels (step 1).</p>\n<p>Calling <code>boundingRect()</code> on a contour returns 4 values: the x and y coordinates of its top-left corner, followed by the width and height of the contour.</p>\n<p>We then use another property of the contour, its top-left coordinate, to determine the logical order of our digits. Specifically, we use the first returned value (<code>cv2.boundingRect(cnt)[0]</code>) since that&apos;s the x value for the top-left coordinate of each region. 
By sorting against this value, our digits are stored in the Python list in an ordered fashion, determined by their respective coordinate value.</p>\n<pre data-role=\"codeBlock\" data-info=\"py\" class=\"language-python\">digits_cnts <span class=\"token operator\">=</span> <span class=\"token punctuation\">[</span><span class=\"token punctuation\">]</span>\ncnts<span class=\"token punctuation\">,</span> _ <span class=\"token operator\">=</span> cv2<span class=\"token punctuation\">.</span>findContours<span class=\"token punctuation\">(</span>eroded<span class=\"token punctuation\">,</span> cv2<span class=\"token punctuation\">.</span>RETR_EXTERNAL<span class=\"token punctuation\">,</span> cv2<span class=\"token punctuation\">.</span>CHAIN_APPROX_SIMPLE<span class=\"token punctuation\">)</span>\n<span class=\"token keyword\">for</span> cnt <span class=\"token keyword\">in</span> cnts<span class=\"token punctuation\">:</span>\n    <span class=\"token punctuation\">(</span>x<span class=\"token punctuation\">,</span> y<span class=\"token punctuation\">,</span> w<span class=\"token punctuation\">,</span> h<span class=\"token punctuation\">)</span> <span class=\"token operator\">=</span> cv2<span class=\"token punctuation\">.</span>boundingRect<span class=\"token punctuation\">(</span>cnt<span class=\"token punctuation\">)</span>\n    <span class=\"token comment\"># step 1</span>\n    <span class=\"token keyword\">if</span> h <span class=\"token operator\">&gt;</span> <span class=\"token number\">20</span><span class=\"token punctuation\">:</span>\n        digits_cnts <span class=\"token operator\">+=</span> <span class=\"token punctuation\">[</span>cnt<span class=\"token punctuation\">]</span>\n<span class=\"token comment\"># step 2</span>\nsorted_digits <span class=\"token operator\">=</span> <span class=\"token builtin\">sorted</span><span class=\"token punctuation\">(</span>digits_cnts<span class=\"token punctuation\">,</span> key<span class=\"token 
operator\">=</span><span class=\"token keyword\">lambda</span> cnt<span class=\"token punctuation\">:</span> cv2<span class=\"token punctuation\">.</span>boundingRect<span class=\"token punctuation\">(</span>cnt<span class=\"token punctuation\">)</span><span class=\"token punctuation\">[</span><span class=\"token number\">0</span><span class=\"token punctuation\">]</span><span class=\"token punctuation\">)</span>\n</pre><p>When we put these together, we now have a complete pipeline:<br>\n<img src=\"assets/digitrecflow.png\" alt></p>\n<p>The full solution code is in <code>digit_01.py</code> but the essential parts are as follow:</p>\n<pre data-role=\"codeBlock\" data-info=\"py\" class=\"language-python\"><span class=\"token keyword\">import</span> cv2\n<span class=\"token keyword\">import</span> numpy <span class=\"token keyword\">as</span> np\n<span class=\"token comment\"># step 1:</span>\nDIGITSDICT <span class=\"token operator\">=</span> <span class=\"token punctuation\">{</span>\n    <span class=\"token punctuation\">(</span><span class=\"token number\">1</span><span class=\"token punctuation\">,</span> <span class=\"token number\">1</span><span class=\"token punctuation\">,</span> <span class=\"token number\">1</span><span class=\"token punctuation\">,</span> <span class=\"token number\">1</span><span class=\"token punctuation\">,</span> <span class=\"token number\">1</span><span class=\"token punctuation\">,</span> <span class=\"token number\">1</span><span class=\"token punctuation\">,</span> <span class=\"token number\">0</span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">:</span> <span class=\"token number\">0</span><span class=\"token punctuation\">,</span>\n    <span class=\"token punctuation\">(</span><span class=\"token number\">0</span><span class=\"token punctuation\">,</span> <span class=\"token number\">1</span><span class=\"token punctuation\">,</span> <span class=\"token number\">1</span><span class=\"token 
punctuation\">,</span> <span class=\"token number\">0</span><span class=\"token punctuation\">,</span> <span class=\"token number\">0</span><span class=\"token punctuation\">,</span> <span class=\"token number\">0</span><span class=\"token punctuation\">,</span> <span class=\"token number\">0</span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">:</span> <span class=\"token number\">1</span><span class=\"token punctuation\">,</span>\n    <span class=\"token punctuation\">(</span><span class=\"token number\">1</span><span class=\"token punctuation\">,</span> <span class=\"token number\">1</span><span class=\"token punctuation\">,</span> <span class=\"token number\">0</span><span class=\"token punctuation\">,</span> <span class=\"token number\">1</span><span class=\"token punctuation\">,</span> <span class=\"token number\">1</span><span class=\"token punctuation\">,</span> <span class=\"token number\">0</span><span class=\"token punctuation\">,</span> <span class=\"token number\">1</span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">:</span> <span class=\"token number\">2</span><span class=\"token punctuation\">,</span>\n    <span class=\"token punctuation\">(</span><span class=\"token number\">1</span><span class=\"token punctuation\">,</span> <span class=\"token number\">1</span><span class=\"token punctuation\">,</span> <span class=\"token number\">1</span><span class=\"token punctuation\">,</span> <span class=\"token number\">1</span><span class=\"token punctuation\">,</span> <span class=\"token number\">0</span><span class=\"token punctuation\">,</span> <span class=\"token number\">0</span><span class=\"token punctuation\">,</span> <span class=\"token number\">1</span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">:</span> <span class=\"token number\">3</span><span class=\"token punctuation\">,</span>\n    <span class=\"token punctuation\">(</span><span class=\"token 
number\">0</span><span class=\"token punctuation\">,</span> <span class=\"token number\">1</span><span class=\"token punctuation\">,</span> <span class=\"token number\">1</span><span class=\"token punctuation\">,</span> <span class=\"token number\">0</span><span class=\"token punctuation\">,</span> <span class=\"token number\">0</span><span class=\"token punctuation\">,</span> <span class=\"token number\">1</span><span class=\"token punctuation\">,</span> <span class=\"token number\">1</span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">:</span> <span class=\"token number\">4</span><span class=\"token punctuation\">,</span>\n    <span class=\"token punctuation\">(</span><span class=\"token number\">1</span><span class=\"token punctuation\">,</span> <span class=\"token number\">0</span><span class=\"token punctuation\">,</span> <span class=\"token number\">1</span><span class=\"token punctuation\">,</span> <span class=\"token number\">1</span><span class=\"token punctuation\">,</span> <span class=\"token number\">0</span><span class=\"token punctuation\">,</span> <span class=\"token number\">1</span><span class=\"token punctuation\">,</span> <span class=\"token number\">1</span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">:</span> <span class=\"token number\">5</span><span class=\"token punctuation\">,</span>\n    <span class=\"token punctuation\">(</span><span class=\"token number\">1</span><span class=\"token punctuation\">,</span> <span class=\"token number\">0</span><span class=\"token punctuation\">,</span> <span class=\"token number\">1</span><span class=\"token punctuation\">,</span> <span class=\"token number\">1</span><span class=\"token punctuation\">,</span> <span class=\"token number\">1</span><span class=\"token punctuation\">,</span> <span class=\"token number\">1</span><span class=\"token punctuation\">,</span> <span class=\"token number\">1</span><span class=\"token 
punctuation\">)</span><span class=\"token punctuation\">:</span> <span class=\"token number\">6</span><span class=\"token punctuation\">,</span>\n    <span class=\"token punctuation\">(</span><span class=\"token number\">1</span><span class=\"token punctuation\">,</span> <span class=\"token number\">1</span><span class=\"token punctuation\">,</span> <span class=\"token number\">1</span><span class=\"token punctuation\">,</span> <span class=\"token number\">0</span><span class=\"token punctuation\">,</span> <span class=\"token number\">0</span><span class=\"token punctuation\">,</span> <span class=\"token number\">1</span><span class=\"token punctuation\">,</span> <span class=\"token number\">0</span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">:</span> <span class=\"token number\">7</span><span class=\"token punctuation\">,</span>\n    <span class=\"token punctuation\">(</span><span class=\"token number\">1</span><span class=\"token punctuation\">,</span> <span class=\"token number\">1</span><span class=\"token punctuation\">,</span> <span class=\"token number\">1</span><span class=\"token punctuation\">,</span> <span class=\"token number\">1</span><span class=\"token punctuation\">,</span> <span class=\"token number\">1</span><span class=\"token punctuation\">,</span> <span class=\"token number\">1</span><span class=\"token punctuation\">,</span> <span class=\"token number\">1</span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">:</span> <span class=\"token number\">8</span><span class=\"token punctuation\">,</span>\n    <span class=\"token punctuation\">(</span><span class=\"token number\">1</span><span class=\"token punctuation\">,</span> <span class=\"token number\">1</span><span class=\"token punctuation\">,</span> <span class=\"token number\">1</span><span class=\"token punctuation\">,</span> <span class=\"token number\">1</span><span class=\"token punctuation\">,</span> <span class=\"token 
number\">0</span><span class=\"token punctuation\">,</span> <span class=\"token number\">1</span><span class=\"token punctuation\">,</span> <span class=\"token number\">1</span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">:</span> <span class=\"token number\">9</span><span class=\"token punctuation\">,</span>\n<span class=\"token punctuation\">}</span>\n\n<span class=\"token comment\"># step 2</span>\nroi <span class=\"token operator\">=</span> cv2<span class=\"token punctuation\">.</span>imread<span class=\"token punctuation\">(</span><span class=\"token string\">&quot;inter/ocbc-roi.png&quot;</span><span class=\"token punctuation\">,</span> flags<span class=\"token operator\">=</span><span class=\"token number\">0</span><span class=\"token punctuation\">)</span>\n\n<span class=\"token comment\"># step 3</span>\nRATIO <span class=\"token operator\">=</span> roi<span class=\"token punctuation\">.</span>shape<span class=\"token punctuation\">[</span><span class=\"token number\">0</span><span class=\"token punctuation\">]</span> <span class=\"token operator\">*</span> <span class=\"token number\">0.2</span>\nroi <span class=\"token operator\">=</span> cv2<span class=\"token punctuation\">.</span>bilateralFilter<span class=\"token punctuation\">(</span>roi<span class=\"token punctuation\">,</span> <span class=\"token number\">5</span><span class=\"token punctuation\">,</span> <span class=\"token number\">30</span><span class=\"token punctuation\">,</span> <span class=\"token number\">60</span><span class=\"token punctuation\">)</span>\ntrimmed <span class=\"token operator\">=</span> roi<span class=\"token punctuation\">[</span><span class=\"token builtin\">int</span><span class=\"token punctuation\">(</span>RATIO<span class=\"token punctuation\">)</span> <span class=\"token punctuation\">:</span><span class=\"token punctuation\">,</span> <span class=\"token builtin\">int</span><span class=\"token punctuation\">(</span>RATIO<span 
class=\"token punctuation\">)</span> <span class=\"token punctuation\">:</span> roi<span class=\"token punctuation\">.</span>shape<span class=\"token punctuation\">[</span><span class=\"token number\">1</span><span class=\"token punctuation\">]</span> <span class=\"token operator\">-</span> <span class=\"token builtin\">int</span><span class=\"token punctuation\">(</span>RATIO<span class=\"token punctuation\">)</span><span class=\"token punctuation\">]</span>\n\n<span class=\"token comment\"># step 4</span>\nedged <span class=\"token operator\">=</span> cv2<span class=\"token punctuation\">.</span>adaptiveThreshold<span class=\"token punctuation\">(</span>\n    trimmed<span class=\"token punctuation\">,</span> <span class=\"token number\">255</span><span class=\"token punctuation\">,</span> cv2<span class=\"token punctuation\">.</span>ADAPTIVE_THRESH_GAUSSIAN_C<span class=\"token punctuation\">,</span> cv2<span class=\"token punctuation\">.</span>THRESH_BINARY_INV<span class=\"token punctuation\">,</span> <span class=\"token number\">5</span><span class=\"token punctuation\">,</span> <span class=\"token number\">5</span>\n<span class=\"token punctuation\">)</span>\n\n<span class=\"token comment\"># step 5</span>\nkernel <span class=\"token operator\">=</span> cv2<span class=\"token punctuation\">.</span>getStructuringElement<span class=\"token punctuation\">(</span>cv2<span class=\"token punctuation\">.</span>MORPH_RECT<span class=\"token punctuation\">,</span> <span class=\"token punctuation\">(</span><span class=\"token number\">2</span><span class=\"token punctuation\">,</span> <span class=\"token number\">5</span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">)</span>\ndilated <span class=\"token operator\">=</span> cv2<span class=\"token punctuation\">.</span>dilate<span class=\"token punctuation\">(</span>edged<span class=\"token punctuation\">,</span> kernel<span class=\"token punctuation\">,</span> iterations<span class=\"token 
operator\">=</span><span class=\"token number\">1</span><span class=\"token punctuation\">)</span>\neroded <span class=\"token operator\">=</span> cv2<span class=\"token punctuation\">.</span>erode<span class=\"token punctuation\">(</span>dilated<span class=\"token punctuation\">,</span> kernel<span class=\"token punctuation\">,</span> iterations<span class=\"token operator\">=</span><span class=\"token number\">1</span><span class=\"token punctuation\">)</span>\n\n<span class=\"token comment\"># step 6</span>\ncnts<span class=\"token punctuation\">,</span> _ <span class=\"token operator\">=</span> cv2<span class=\"token punctuation\">.</span>findContours<span class=\"token punctuation\">(</span>eroded<span class=\"token punctuation\">,</span> cv2<span class=\"token punctuation\">.</span>RETR_EXTERNAL<span class=\"token punctuation\">,</span> cv2<span class=\"token punctuation\">.</span>CHAIN_APPROX_SIMPLE<span class=\"token punctuation\">)</span>\ndigits_cnts <span class=\"token operator\">=</span> <span class=\"token punctuation\">[</span><span class=\"token punctuation\">]</span>\n<span class=\"token keyword\">for</span> cnt <span class=\"token keyword\">in</span> cnts<span class=\"token punctuation\">:</span>\n    <span class=\"token punctuation\">(</span>x<span class=\"token punctuation\">,</span> y<span class=\"token punctuation\">,</span> w<span class=\"token punctuation\">,</span> h<span class=\"token punctuation\">)</span> <span class=\"token operator\">=</span> cv2<span class=\"token punctuation\">.</span>boundingRect<span class=\"token punctuation\">(</span>cnt<span class=\"token punctuation\">)</span>\n    <span class=\"token keyword\">if</span> h <span class=\"token operator\">&gt;</span> <span class=\"token number\">20</span><span class=\"token punctuation\">:</span>\n        digits_cnts <span class=\"token operator\">+=</span> <span class=\"token punctuation\">[</span>cnt<span class=\"token punctuation\">]</span>\n\n<span class=\"token comment\"># 
step 7</span>\nsorted_digits <span class=\"token operator\">=</span> <span class=\"token builtin\">sorted</span><span class=\"token punctuation\">(</span>digits_cnts<span class=\"token punctuation\">,</span> key<span class=\"token operator\">=</span><span class=\"token keyword\">lambda</span> cnt<span class=\"token punctuation\">:</span> cv2<span class=\"token punctuation\">.</span>boundingRect<span class=\"token punctuation\">(</span>cnt<span class=\"token punctuation\">)</span><span class=\"token punctuation\">[</span><span class=\"token number\">0</span><span class=\"token punctuation\">]</span><span class=\"token punctuation\">)</span>\n\n<span class=\"token comment\"># step 8</span>\ndigits <span class=\"token operator\">=</span> <span class=\"token punctuation\">[</span><span class=\"token punctuation\">]</span>\n<span class=\"token keyword\">for</span> cnt <span class=\"token keyword\">in</span> sorted_digits<span class=\"token punctuation\">:</span>\n    <span class=\"token comment\"># step 8a</span>\n    <span class=\"token punctuation\">(</span>x<span class=\"token punctuation\">,</span> y<span class=\"token punctuation\">,</span> w<span class=\"token punctuation\">,</span> h<span class=\"token punctuation\">)</span> <span class=\"token operator\">=</span> cv2<span class=\"token punctuation\">.</span>boundingRect<span class=\"token punctuation\">(</span>cnt<span class=\"token punctuation\">)</span>\n    roi <span class=\"token operator\">=</span> eroded<span class=\"token punctuation\">[</span>y <span class=\"token punctuation\">:</span> y <span class=\"token operator\">+</span> h<span class=\"token punctuation\">,</span> x <span class=\"token punctuation\">:</span> x <span class=\"token operator\">+</span> w<span class=\"token punctuation\">]</span>\n    qW<span class=\"token punctuation\">,</span> qH <span class=\"token operator\">=</span> <span class=\"token builtin\">int</span><span class=\"token punctuation\">(</span>w <span class=\"token 
operator\">*</span> <span class=\"token number\">0.25</span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">,</span> <span class=\"token builtin\">int</span><span class=\"token punctuation\">(</span>h <span class=\"token operator\">*</span> <span class=\"token number\">0.15</span><span class=\"token punctuation\">)</span>\n    fractionH<span class=\"token punctuation\">,</span> halfH<span class=\"token punctuation\">,</span> fractionW <span class=\"token operator\">=</span> <span class=\"token builtin\">int</span><span class=\"token punctuation\">(</span>h <span class=\"token operator\">*</span> <span class=\"token number\">0.05</span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">,</span> <span class=\"token builtin\">int</span><span class=\"token punctuation\">(</span>h <span class=\"token operator\">*</span> <span class=\"token number\">0.5</span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">,</span> <span class=\"token builtin\">int</span><span class=\"token punctuation\">(</span>w <span class=\"token operator\">*</span> <span class=\"token number\">0.25</span><span class=\"token punctuation\">)</span>\n\n    <span class=\"token comment\"># step 8b</span>\n    sevensegs <span class=\"token operator\">=</span> <span class=\"token punctuation\">[</span>\n        <span class=\"token punctuation\">(</span><span class=\"token punctuation\">(</span><span class=\"token number\">0</span><span class=\"token punctuation\">,</span> <span class=\"token number\">0</span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">,</span> <span class=\"token punctuation\">(</span>w<span class=\"token punctuation\">,</span> qH<span class=\"token punctuation\">)</span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">,</span>  <span class=\"token comment\"># a (top bar)</span>\n        <span class=\"token punctuation\">(</span><span class=\"token 
punctuation\">(</span>w <span class=\"token operator\">-</span> qW<span class=\"token punctuation\">,</span> <span class=\"token number\">0</span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">,</span> <span class=\"token punctuation\">(</span>w<span class=\"token punctuation\">,</span> halfH<span class=\"token punctuation\">)</span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">,</span>  <span class=\"token comment\"># b (upper right)</span>\n        <span class=\"token punctuation\">(</span><span class=\"token punctuation\">(</span>w <span class=\"token operator\">-</span> qW<span class=\"token punctuation\">,</span> halfH<span class=\"token punctuation\">)</span><span class=\"token punctuation\">,</span> <span class=\"token punctuation\">(</span>w<span class=\"token punctuation\">,</span> h<span class=\"token punctuation\">)</span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">,</span>  <span class=\"token comment\"># c (lower right)</span>\n        <span class=\"token punctuation\">(</span><span class=\"token punctuation\">(</span><span class=\"token number\">0</span><span class=\"token punctuation\">,</span> h <span class=\"token operator\">-</span> qH<span class=\"token punctuation\">)</span><span class=\"token punctuation\">,</span> <span class=\"token punctuation\">(</span>w<span class=\"token punctuation\">,</span> h<span class=\"token punctuation\">)</span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">,</span>  <span class=\"token comment\"># d (lower bar)</span>\n        <span class=\"token punctuation\">(</span><span class=\"token punctuation\">(</span><span class=\"token number\">0</span><span class=\"token punctuation\">,</span> halfH<span class=\"token punctuation\">)</span><span class=\"token punctuation\">,</span> <span class=\"token punctuation\">(</span>qW<span class=\"token punctuation\">,</span> h<span class=\"token 
punctuation\">)</span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">,</span>  <span class=\"token comment\"># e (lower left)</span>\n        <span class=\"token punctuation\">(</span><span class=\"token punctuation\">(</span><span class=\"token number\">0</span><span class=\"token punctuation\">,</span> <span class=\"token number\">0</span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">,</span> <span class=\"token punctuation\">(</span>qW<span class=\"token punctuation\">,</span> halfH<span class=\"token punctuation\">)</span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">,</span>  <span class=\"token comment\"># f (upper left)</span>\n        <span class=\"token comment\"># ((0, halfH - fractionH), (w, halfH + fractionH)) # center</span>\n        <span class=\"token punctuation\">(</span>\n            <span class=\"token punctuation\">(</span><span class=\"token number\">0</span> <span class=\"token operator\">+</span> fractionW<span class=\"token punctuation\">,</span> halfH <span class=\"token operator\">-</span> fractionH<span class=\"token punctuation\">)</span><span class=\"token punctuation\">,</span>\n            <span class=\"token punctuation\">(</span>w <span class=\"token operator\">-</span> fractionW<span class=\"token punctuation\">,</span> halfH <span class=\"token operator\">+</span> fractionH<span class=\"token punctuation\">)</span><span class=\"token punctuation\">,</span>\n        <span class=\"token punctuation\">)</span><span class=\"token punctuation\">,</span>  <span class=\"token comment\"># center</span>\n    <span class=\"token punctuation\">]</span>\n\n    <span class=\"token comment\"># step 8c</span>\n    on <span class=\"token operator\">=</span> <span class=\"token punctuation\">[</span><span class=\"token number\">0</span><span class=\"token punctuation\">]</span> <span class=\"token operator\">*</span> <span class=\"token number\">7</span>\n    
<span class=\"token keyword\">for</span> <span class=\"token punctuation\">(</span>i<span class=\"token punctuation\">,</span> <span class=\"token punctuation\">(</span><span class=\"token punctuation\">(</span>p1x<span class=\"token punctuation\">,</span> p1y<span class=\"token punctuation\">)</span><span class=\"token punctuation\">,</span> <span class=\"token punctuation\">(</span>p2x<span class=\"token punctuation\">,</span> p2y<span class=\"token punctuation\">)</span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">)</span> <span class=\"token keyword\">in</span> <span class=\"token builtin\">enumerate</span><span class=\"token punctuation\">(</span>sevensegs<span class=\"token punctuation\">)</span><span class=\"token punctuation\">:</span>\n        region <span class=\"token operator\">=</span> roi<span class=\"token punctuation\">[</span>p1y<span class=\"token punctuation\">:</span>p2y<span class=\"token punctuation\">,</span> p1x<span class=\"token punctuation\">:</span>p2x<span class=\"token punctuation\">]</span>\n        <span class=\"token keyword\">print</span><span class=\"token punctuation\">(</span>\n            <span class=\"token string-interpolation\"><span class=\"token string\">f&quot;</span><span class=\"token interpolation\"><span class=\"token punctuation\">{</span>i<span class=\"token punctuation\">}</span></span><span class=\"token string\">: Sum of 1: </span><span class=\"token interpolation\"><span class=\"token punctuation\">{</span>np<span class=\"token punctuation\">.</span><span class=\"token builtin\">sum</span><span class=\"token punctuation\">(</span>region <span class=\"token operator\">==</span> <span class=\"token number\">255</span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">}</span></span><span class=\"token string\">, Sum of 0: </span><span class=\"token interpolation\"><span class=\"token punctuation\">{</span>np<span class=\"token punctuation\">.</span><span 
class=\"token builtin\">sum</span><span class=\"token punctuation\">(</span>region <span class=\"token operator\">==</span> <span class=\"token number\">0</span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">}</span></span><span class=\"token string\">, Shape: </span><span class=\"token interpolation\"><span class=\"token punctuation\">{</span>region<span class=\"token punctuation\">.</span>shape<span class=\"token punctuation\">}</span></span><span class=\"token string\">, Size: </span><span class=\"token interpolation\"><span class=\"token punctuation\">{</span>region<span class=\"token punctuation\">.</span>size<span class=\"token punctuation\">}</span></span><span class=\"token string\">&quot;</span></span>\n        <span class=\"token punctuation\">)</span>\n        <span class=\"token keyword\">if</span> np<span class=\"token punctuation\">.</span><span class=\"token builtin\">sum</span><span class=\"token punctuation\">(</span>region <span class=\"token operator\">==</span> <span class=\"token number\">255</span><span class=\"token punctuation\">)</span> <span class=\"token operator\">&gt;</span> region<span class=\"token punctuation\">.</span>size <span class=\"token operator\">*</span> <span class=\"token number\">0.5</span><span class=\"token punctuation\">:</span>\n            on<span class=\"token punctuation\">[</span>i<span class=\"token punctuation\">]</span> <span class=\"token operator\">=</span> <span class=\"token number\">1</span>\n        <span class=\"token keyword\">print</span><span class=\"token punctuation\">(</span><span class=\"token string-interpolation\"><span class=\"token string\">f&quot;State of ON: </span><span class=\"token interpolation\"><span class=\"token punctuation\">{</span>on<span class=\"token punctuation\">}</span></span><span class=\"token string\">&quot;</span></span><span class=\"token punctuation\">)</span>\n    <span class=\"token comment\"># step 8d</span>\n    digit <span class=\"token 
operator\">=</span> DIGITSDICT<span class=\"token punctuation\">[</span><span class=\"token builtin\">tuple</span><span class=\"token punctuation\">(</span>on<span class=\"token punctuation\">)</span><span class=\"token punctuation\">]</span>\n    <span class=\"token keyword\">print</span><span class=\"token punctuation\">(</span><span class=\"token string-interpolation\"><span class=\"token string\">f&quot;Digit is: </span><span class=\"token interpolation\"><span class=\"token punctuation\">{</span>digit<span class=\"token punctuation\">}</span></span><span class=\"token string\">&quot;</span></span><span class=\"token punctuation\">)</span>\n    digits <span class=\"token operator\">+=</span> <span class=\"token punctuation\">[</span>digit<span class=\"token punctuation\">]</span>\n    <span class=\"token comment\"># step 9</span>\n    cv2<span class=\"token punctuation\">.</span>rectangle<span class=\"token punctuation\">(</span>canvas<span class=\"token punctuation\">,</span> <span class=\"token punctuation\">(</span>x<span class=\"token punctuation\">,</span> y<span class=\"token punctuation\">)</span><span class=\"token punctuation\">,</span> <span class=\"token punctuation\">(</span>x <span class=\"token operator\">+</span> w<span class=\"token punctuation\">,</span> y <span class=\"token operator\">+</span> h<span class=\"token punctuation\">)</span><span class=\"token punctuation\">,</span> CYAN<span class=\"token punctuation\">,</span> <span class=\"token number\">1</span><span class=\"token punctuation\">)</span>\n    cv2<span class=\"token punctuation\">.</span>putText<span class=\"token punctuation\">(</span>canvas<span class=\"token punctuation\">,</span> <span class=\"token builtin\">str</span><span class=\"token punctuation\">(</span>digit<span class=\"token punctuation\">)</span><span class=\"token punctuation\">,</span> <span class=\"token punctuation\">(</span>x <span class=\"token operator\">-</span> <span class=\"token number\">5</span><span 
class=\"token punctuation\">,</span> y <span class=\"token operator\">+</span> <span class=\"token number\">6</span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">,</span> FONT<span class=\"token punctuation\">,</span> <span class=\"token number\">0.3</span><span class=\"token punctuation\">,</span> <span class=\"token punctuation\">(</span><span class=\"token number\">0</span><span class=\"token punctuation\">,</span> <span class=\"token number\">0</span><span class=\"token punctuation\">,</span> <span class=\"token number\">0</span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">,</span> <span class=\"token number\">1</span><span class=\"token punctuation\">)</span>\n    cv2<span class=\"token punctuation\">.</span>imshow<span class=\"token punctuation\">(</span><span class=\"token string\">&quot;Digit&quot;</span><span class=\"token punctuation\">,</span> canvas<span class=\"token punctuation\">)</span>\n    cv2<span class=\"token punctuation\">.</span>waitKey<span class=\"token punctuation\">(</span><span class=\"token number\">0</span><span class=\"token punctuation\">)</span>\n<span class=\"token keyword\">print</span><span class=\"token punctuation\">(</span><span class=\"token string-interpolation\"><span class=\"token string\">f&quot;Digits on the token are: </span><span class=\"token interpolation\"><span class=\"token punctuation\">{</span>digits<span class=\"token punctuation\">}</span></span><span class=\"token string\">&quot;</span></span><span class=\"token punctuation\">)</span>\n</pre><ul>\n<li>Step 1: Initialize the lookup dictionary</li>\n<li>Step 2: Read our ROI image using OpenCV</li>\n<li>Step 3: Noise reduction and trim away asymmetrical white space in our ROI</li>\n<li>Step 4: Binarize our image using adaptive thresholding</li>\n<li>Step 5: Morphological transformation to remove noise and fill the small holes in our digit</li>\n<li>Step 6: Find contours in our image with a height 
greater than 20px</li>\n<li>Step 7: Sort the contours from left to right, using the x value of their bounding-box coordinates (note that <code>sorted</code> returns a new, sorted list; it does not sort in place)</li>\n<li>Step 8\n<ul>\n<li>Step 8a: Compute a rectangular bounding box for each digit, and some convenience units that we later use to slice the seven segments. Notice that these convenience units are not hard-coded values, but are proportional to the height (<code>h</code>) of our rectangular box</li>\n<li>Step 8b: Slice the seven segments; the first segment (&quot;A&quot;) spans from point (0, 0) to (w, <code>int(h * 0.15)</code>), i.e. it is <code>w</code> wide and 15% of the height of the full digit contour</li>\n<li>Step 8c: Initialize the state to <code>0</code> for each of the 7 segments, then conditionally set regions with more white than black pixels to <code>1</code></li>\n<li>Step 8d: Once all 7 states have been set, perform a lookup against the digit dictionary created in step 1, and append the value to the <code>digits</code> list created at the beginning of step 8</li>\n</ul>\n</li>\n<li>Step 9: Draw a rectangle and the predicted digit for each bounding box. Finally, print the <code>digits</code> list.</li>\n</ul>\n<h1 class=\"mume-header\" id=\"references\">References</h1>\n\n<hr class=\"footnotes-sep\">\n<section class=\"footnotes\">\n<ol class=\"footnotes-list\">\n<li id=\"fn1\" class=\"footnote-item\"><p>LeCun, Y., Bottou, L., Bengio, Y., and Haffner, P. (1998). Gradient-based learning applied to document recognition. 
Proceedings of the IEEE, 86, 2278&#x2013;2324 <a href=\"#fnref1\" class=\"footnote-backref\">&#x21A9;&#xFE0E;</a></p>\n</li>\n<li id=\"fn2\" class=\"footnote-item\"><p>Saliency map, Wikipedia <a href=\"#fnref2\" class=\"footnote-backref\">&#x21A9;&#xFE0E;</a></p>\n</li>\n<li id=\"fn3\" class=\"footnote-item\"><p>Morphological Transformations, OpenCV Documentation <a href=\"#fnref3\" class=\"footnote-backref\">&#x21A9;&#xFE0E;</a></p>\n</li>\n<li id=\"fn4\" class=\"footnote-item\"><p>Seven-segment display, Wikipedia <a href=\"#fnref4\" class=\"footnote-backref\">&#x21A9;&#xFE0E;</a></p>\n</li>\n</ol>\n</section>\n\n      </div>\n      <div class=\"md-sidebar-toc\"><ul>\n<li><a href=\"#background\">Background</a>\n<ul>\n<li><a href=\"#what-about-deep-learning\">What about Deep Learning?</a></li>\n<li><a href=\"#region-of-interest\">Region of Interest</a>\n<ul>\n<li><a href=\"#selecting-region-of-interest\">Selecting Region of Interest</a></li>\n<li><a href=\"#arc-length-and-area-size\">Arc Length and Area Size</a>\n<ul>\n<li><a href=\"#dive-deeper-roi\">Dive Deeper: ROI</a></li>\n</ul>\n</li>\n<li><a href=\"#roi-extraction\">ROI extraction</a></li>\n</ul>\n</li>\n<li><a href=\"#morphological-transformations\">Morphological Transformations</a>\n<ul>\n<li><a href=\"#erosion\">Erosion</a></li>\n<li><a href=\"#dilation\">Dilation</a></li>\n<li><a href=\"#opening-and-closing\">Opening and Closing</a></li>\n<li><a href=\"#learn-by-building-morphological-transformation\">Learn-by-building: Morphological Transformation</a></li>\n</ul>\n</li>\n<li><a href=\"#seven-segment-display\">Seven-segment display</a>\n<ul>\n<li><a href=\"#practical-strategies\">Practical Strategies</a>\n<ul>\n<li><a href=\"#contour-properties\">Contour Properties</a></li>\n</ul>\n</li>\n</ul>\n</li>\n</ul>\n</li>\n<li><a href=\"#references\">References</a></li>\n</ul>\n</div>\n      <a id=\"sidebar-toc-btn\">&#x2261;</a>\n    \n    \n    \n    \n    \n    \n    \n    \n<script>\n\nvar sidebarTOCBtn = 
document.getElementById('sidebar-toc-btn')\nsidebarTOCBtn.addEventListener('click', function(event) {\n  event.stopPropagation()\n  if (document.body.hasAttribute('html-show-sidebar-toc')) {\n    document.body.removeAttribute('html-show-sidebar-toc')\n  } else {\n    document.body.setAttribute('html-show-sidebar-toc', true)\n  }\n})\n</script>\n      \n  \n    </body></html>"
  },
  {
    "path": "digitrecognition/digitrec.md",
    "content": "# Background\nIn Chapter 4: Digit Recognition, we'll add a few new techniques to our image processing toolset by attempting to build a digit recognition pipeline from start to finish. Throughout the exercise, we will get to practice the image preprocessing tricks we've picked up from previous chapters:\n- Image manipulations such as resizing, cropping, rotation, color conversion  \n- Blurring and sharpening operations\n- Thresholding and Edge Detection\n- Contour approximation\n\nNew methods and strategies that you'll be learning include:\n- Drawing operations (rectangles, text) on our image  \n- Region of interest and bounding rectangles\n- Morphological transformations\n- The Seven-Segment Display \n\n## What about Deep Learning?\nTo be clear, specialised deep learning libraries that have sprung up in recent years are a lot more robust in their approach. By utilizing machine learning principles (cost functions, gradient descent, etc.), these specialised libraries can handle highly complex object recognition and OCR (optical character recognition) tasks at the cost of brute computing power.\n\nThe overarching motivation of this free course, however, is to make clear to beginners what constitutes artificial intelligence, and to illustrate the principal benefits of machine learning. I try to achieve that by demonstrating -- over multiple chapters of this course -- how computer vision was traditionally, or rather \"classically\", performed prior to the emergence of deep learning. 
\n\nBy learning the classical approaches to computer vision, the student (you) can see the effort it takes to hand-tune parameters, which adds a new dimension of appreciation for the self-learning methods that we'll discuss in the near future.\n\n## Region of Interest\nDo a quick Google search on \"digit recognition\" or \"digit classification\" and it's hard to find an introductory deep learning course that **doesn't use** the famous MNIST (Modified National Institute of Standards and Technology)[^1] database. This handwritten digit database has long been the _de facto_ standard in pretty much any machine learning tutorial:\n\n![](assets/mnist.png)\n\nBut I'd argue that, for a budding computer vision developer, your learning objectives are better served by taking a different approach. \n\nBy choosing real-life images, you are confronted with a few more key challenges that are not present when using a well-curated database such as MNIST. These challenges present new opportunities to learn about key concepts such as **region of interest** and **morphological operations** that you will come to rely upon greatly in the future. \n\nFirst, take a look at 4 real-life pictures of security tokens issued by banks and institutional agencies (left-to-right: Bank Central Asia, DBS, OCBC Bank, OneKey for Singapore Government e-services): \n\n![](assets/securitytokens.png)\n\nNotice how noisy these images are: each is shot against a different background and under different lighting conditions, and each token differs in size, shape, and color. \n\nYour task, as a computer vision developer, is to develop a pipeline that, in each phase, takes you closer to the goal. Roughly speaking, given the above task, we would formulate a pipeline that looks like the following:\n1. Preprocessing, noise reduction\n2. Contour approximation\n3. Find the region of interest (ROI), that is, the area of the LCD display in each of these pictures\n4. 
Extract ROI for further preprocessing, discarding the rest of the image\n5. Isolate each digit from the ROI\n6. Iteratively classify each digit in the image\n7. Combine the per-digit classifications into a final string (\"output\")\n\nIn practice, steps (1) and (2) above are the \"application\" of the methods you've learned in previous chapters of this series. As we'll soon observe, we will use a combination of blurring operations and edge detection to draw our contours. Among the contours, one of them would be the LCD display containing the digits to be classified. That is our **Region of Interest**.\n\n![](assets/croproi.gif)\n\n### Selecting Region of Interest\nThe GIF above demonstrates the code in `roi_01.py`, but essentially it shows the `selectROI` method in action. You'll commonly combine the `selectROI` method with either a slicing operation to crop your region of interest, or a drawing operation to call attention to a specific region of the image.\n\n```py\nx,y,w,h = cv2.selectROI(\"Region of interest\", img)\ncropped = img[y:y+h, x:x+w]\n# draw rectangle \ncv2.rectangle(img_color, (x,y), (x+w,y+h), (255,0,0), 2)\n```\n\nIn most cases, it simply wouldn't be realistic to render an image and manually specify our region of interest. We'll need this operation to be as close to automatic as possible. But how exactly? That depends greatly on the specific problem set. \n\nIn some cases, the obvious strategy would be simple shape recognition, say by counting the number of vertices of each contour. 
The following code is an example implementation of that:\n\n```py\n# cnt = contour\nperi = cv2.arcLength(cnt, True)\n# contour approximation\ncnt_approx = cv2.approxPolyDP(cnt, 0.03 * peri, True)\nif len(cnt_approx) == 3:\n    est_shape = 'triangle'\n...\nelif len(cnt_approx) == 5:\n    est_shape = 'pentagon'\n...\n```\n\nIn other cases, you may employ a strategy that tries to match contours based on Hu moments (which we'll study in detail in future chapters). \n\nOther methods may involve a saliency map, or a visual attention map, for ROI extraction. These methods create a new representation of the original image where each pixel's **unique quality** is amplified or emphasized. One example implementation on Wikipedia[^2] demonstrates how straightforward this concept really is:\n\n$$SALS(I_k) = \\sum^{N}_{i=1}|I_k-I_i|$$\n\nAs you add new tools and strategies to your computer vision toolbox, you will pick up new approaches to ROI extraction. It is an interesting field of research that has been gaining popularity with the emergence of deep learning.\n\nAs for the images of bank security tokens, can you think of an approach that may be a good fit? Our region of interest is the LCD screen at the top of the button pad on each device, and they all seem to be rather consistent in shape and size. Give it some thought and read on to find out.\n\n### Arc Length and Area Size\nI've hinted at shape and size being a factor, so maybe that would be a good starting point. The good news is that OpenCV makes this incredibly easy through the `contourArea()` and `arcLength()` functions. 
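To build intuition for the two numbers these functions return, here is a minimal pure-Python sketch, using a hypothetical 100×100 square contour: `contourArea()` conceptually evaluates the Green/shoelace formula for a simple closed polygon, and `arcLength()` sums the lengths of consecutive segments:

```py
import math

# Hypothetical toy contour: a 100x100 axis-aligned square
square = [(0, 0), (100, 0), (100, 100), (0, 100)]

def contour_area(pts):
    # Shoelace formula -- conceptually what contourArea() computes
    # for a simple closed polygon
    acc = 0.0
    n = len(pts)
    for i in range(n):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % n]
        acc += x1 * y2 - x2 * y1
    return abs(acc) / 2.0

def arc_length(pts, closed=True):
    # Sum of segment lengths -- conceptually what arcLength() computes;
    # a closed contour also includes the segment back to the start
    n = len(pts)
    last = n if closed else n - 1
    total = 0.0
    for i in range(last):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % n]
        total += math.hypot(x2 - x1, y2 - y1)
    return total

print(contour_area(square))  # 10000.0
print(arc_length(square))    # 400.0
```

On a real contour from `findContours()` you would of course call the OpenCV functions directly; this sketch is only meant to demystify what they measure.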
\n\nThe following snippet of code, lifted from `contourarea_01.py`, finds all contours and sorts them by area in descending order before storing the first 10 in `cnts`:\n```py\ncnts, _ = cv2.findContours(edged, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)\n# sort contours by contourArea, and get the first 10\ncnts = sorted(cnts, key=cv2.contourArea, reverse=True)[:10]\n```\n\nWe can also obtain the contour area and perimeter iteratively in a for-loop, like the following:\n```py\ncnts, _ = cv2.findContours(edged, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)\nfor i in range(len(cnts)):\n    area = cv2.contourArea(cnts[i])\n    peri = cv2.arcLength(cnts[i], closed=True)\n    print(f'Area:{area}, Perimeter:{peri}')\n```\n\nIn effect, we're looping through each contour that the `findContours()` operation found, and computing two values each time, `area` and `peri`. \n\nNote that the contour perimeter is also known as the arc length. The second argument, `closed`, specifies whether the shape is a closed contour (`True`) or just a curve (`closed=False`). \n\nExecute `contourarea_01.py` and observe how each contour is displayed, from the one with the largest area to the one with the least, for a total of 10 contours. As you run the script on different pictures of bank security tokens, you'll see that it does a reliable job of finding the contours, sorting them, and returning our LCD display screen as the first in the list. This makes sense, because visually it is apparent that the LCD display occupies the largest area among the closed shapes in our picture.\n\n#### Dive Deeper: ROI\n1. Use `assets/dbs.jpg` instead of `assets/ocbc.jpg` in `contourarea_01.py`. Were you able to extract the region of interest (LCD Display) successfully without any changes to the script?\n\n2. Could we have successfully extracted our region of interest had we used `arcLength` in our strategy?\n\n3. 
Suppose we only wanted to extract the region of interest and not the rest, which line of code would you change? Reflect the change in the code and execute it to confirm that you have performed this exercise correctly. \n\n4. Suppose we wanted the contours sorted according to their respective areas, from the smallest to the largest, which line of code would you change? Reflect the change in the code and execute it to confirm that you have performed this exercise correctly.\n\nWhile working through the exercises above, you may find it helpful to also draw text describing the area and perimeter next to each contour. I've shown you how this can be done in `contourarea_02.py`, but the essential addition we make to the earlier code is the two calls to `putText()`:\n\n```py\nPURPLE = (75, 0, 130)\nTHICKNESS = 1\nFONT = cv2.FONT_HERSHEY_SIMPLEX\ncv2.putText(img_color, \"Area:\" + str(area), (x, y - 15), FONT, 0.4, PURPLE, THICKNESS)\ncv2.putText(img_color, \"Perimeter:\" + str(peri), (x, y - 5), FONT, 0.4, PURPLE, THICKNESS)\n```\n\n![](assets/textcontour.png)\n\n### ROI extraction\nWith these foundations, we are now ready to write a simple utility script that:\n1. Find our region of interest\n2. Crop the ROI into a new image\n3. 
Save it into a folder named `/inter` (intermediary) for the actual digit recognition later\n\nMuch of what you need to do has already been presented so far, but the core pieces, lifted from `roi_02.py`, are the following few lines of code:\n\n```py\nimg = cv2.imread(...)\nblurred = cv2.GaussianBlur(img, (7, 7), 0)\nedged = cv2.Canny(blurred, 130, 150, 255)\ncnts, _ = cv2.findContours(edged, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)\ncnts = sorted(cnts, key=cv2.contourArea, reverse=True)[:1]\n\nx, y, w, h = cv2.boundingRect(cnts[0])\nroi = img[y : y + h, x : x + w]\ncv2.imwrite(\"roi.png\", roi)\n```\n\nThe `roi_02.py` utility script uses the `argparse` library so the user can specify a file path with the `-p` (or `--path`) flag, like so:\n```bash\npython roi_02.py -p assets/ocbc.jpg\n# equivalent:\npython roi_02.py --path assets/ocbc.jpg\n```\n\nIf the user does not specify a file path using the `-p` flag, the default value is `assets/ocbc.jpg`. If you wish to change this, edit `roi_02.py` and specify a different value for the `default` parameter.\n\n```py\nparser = argparse.ArgumentParser()\nparser.add_argument(\"-p\", \"--path\", default=\"assets/ocbc.jpg\")\n```\n\nYou should run this exercise using `dbs.jpg`, `ocbc2.jpg`, or `onekey.jpg` at least once. Execute the script and check the `inter` folder to confirm that the ROI has been saved. When you're done, you are ready to move on to the next phase of the digit recognition pipeline. \n\n## Morphological Transformations\nOnce the region of interest is obtained, we have an image that may still contain noise. This is especially the case when our ROI is obtained by means of thresholding methods, since you can expect some \"non-features\" (noise) to also be included in the resulting image. \n\nTo account for these imperfections, we will now perform a series of operations on our image. We'll learn what they are formally, but let's begin by seeing what it is that they _offer_ to our image processing pipeline. 
I've included a picture with some random noise, as follows:\n\n![](assets/0417s.png)\n\nThe digits \"0417\" are clearly discernible to the human eye despite the presence of noise. However, consider the perspective of a global thresholding operation: these pixel values are \"noise\" to us, but a computer has no such notion of which pixel values are meaningful and which are not. A threshold value such as the global mean will take all values into account indiscriminately. A contour finding operation will return thousands of tiny round segments instead of 4 (they may be tiny, but they are completely valid contours). \n\nAn image processing pipeline that fails to account for these may result in sub-optimal performance or, very often, completely undesired results. \n\nEnter two of the most fundamental morphological transformations: **erosion** and **dilation**. \n\n### Erosion\nErosion \"erodes away the boundaries of foreground object\"[^3] by sliding a kernel through the image and setting a pixel to 1 **only if all the pixels under the kernel are 1**.\n\nThis in effect discards pixels near the boundary and any floating pixels that are not part of a larger blob (which is what the human eye is interested in). Because pixels are eroded, your foreground object will shrink in size.\n\n### Dilation\nThe opposite of erosion, Dilation sets a pixel to 1 if **at least one pixel under the kernel is 1**, essentially \"growing\" the foreground object. \n\nBecause of how these operations work, there are a couple of things to note:\n1. Morphological transformations are usually performed on binary images. Recall that pixel values in binary images are either full white (i.e. 1) or black (i.e. 0). \n2. By convention, we want to keep our foreground in white and the background in black  \n3. 
Because erosion results in a shrinking foreground and dilation results in a growing foreground, these two operations are also commonly used in combination, e.g. erosion followed by dilation, or vice versa\n\n![](assets/morphexample.png)\n\nThe full code solution is in `morphological_02.py`.\n\nAs we read our image in grayscale mode (`flags=0`), we obtain a white background and a mostly-black foreground. This is illustrated in the subplot titled \"Original\" above. We begin our preprocessing steps by first binarizing the image (step 1), followed by inverting the colors (step 2) to get a white-on-black image. \n\nAn erosion operation is then performed (step 3). This works by creating our kernel (either through `numpy` or through `opencv`'s structuring element) and sliding that kernel across our image to remove white noise in our image. \n\nThe side-effect is that our foreground object has now shrunk in size as its boundaries are eroded away. We grow it back by applying a dilation (step 4) and finally show the output as illustrated in the bottom-right pane of the image above.\n\n```py\n# read as grayscale\nroi = cv2.imread(\"assets/0417s.png\", flags=0)\n# step 1: binarize\n_, thresh = cv2.threshold(roi, 170, 255, cv2.THRESH_BINARY)\n# step 2: invert to get white-on-black\ninv = cv2.bitwise_not(thresh)\n# step 3 (option 1):\nkernel = np.ones((5,5), np.uint8)\n# step 3 (option 2):\nkernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))\neroded = cv2.erode(inv, kernel, iterations=1)\n# step 4:\ndilated = cv2.dilate(eroded, kernel, iterations=1)\ncv2.imshow(\"Transformed\", dilated)\ncv2.waitKey(0)\n```\n\nOpenCV provides three shapes for our kernel:\n- Rectangular box: `MORPH_RECT`\n- Cross: `MORPH_CROSS`\n- Ellipse: `MORPH_ELLIPSE`\n\nThey are fed as the first argument into `cv2.getStructuringElement()`, with the second being the kernel size (`ksize`) itself. 
The third argument is the _anchor point_, which defaults to the center.\n\n### Opening and Closing\nAnother name for **Erosion followed by Dilation** is Opening. It is useful for removing noise from our image. The reverse of Opening is Closing, where we **perform Dilation followed by Erosion**; it is particularly suited for closing small holes inside foreground objects.\n\nOpenCV includes the more generic `morphologyEx` method for all other morphological operations beyond Erosion and Dilation. The function takes the image as the first argument, the operation as the second argument, and finally the kernel. Compare how your code differs between `cv2.erode` and `cv2.dilate`, and their respective equivalents using `cv2.morphologyEx()`:\n\n```py\nimport cv2\nimport numpy as np\n\nimg = cv2.imread('image.png', 0)\nkernel = np.ones((5, 5), np.uint8)\nerosion = cv2.erode(img, kernel, iterations=1)\n# Equivalent:\n# cv2.morphologyEx(img, cv2.MORPH_ERODE, kernel, iterations=1)\ndilation = cv2.dilate(img, kernel, iterations=1)\n# Equivalent:\n# cv2.morphologyEx(img, cv2.MORPH_DILATE, kernel, iterations=1)\nopening = cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel)\nclosing = cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel)\n```\n\n### Learn-by-building: Morphological Transformation\nIn the `homework` directory, you'll find `0417h.png`. Your job is to apply what you've learned in this lesson to clean up the image. Your output should have these qualities:\n1. As free of noise as possible (remove the lines and the red splattered dots across the image)\n2. If you run `findContours()` on the output, you should have exactly 4 contours\n3. 
Foreground object in white, background in black\n\n![](homework/0417h.png)\n\nYou are free to pick your strategy, but a reference solution would look like the following:\n\n![](assets/0417reference.png)\n\n## Seven-segment display\nThe seven-segment display (also known as a \"seven-segment indicator\") is a form of electronic display device for displaying decimal numerals[^4], widely used in digital clocks, electronic meters, calculators and banking security tokens.\n\n![](assets/sevenseg.png)\n\nThis is relevant because it is the character representation of our digits in each of these security tokens. If we can isolate each digit from the others, we can iteratively predict the \"class\" of each digit (0 to 9). Specifically, we are going to perform a classification task based on the state of each segment. \n\nTo ease our understanding, let's refer to each segment using the letters A to G:\n\n![](assets/sevenseg1.png)\n\nWe can then create a lookup table that maps the collective states to the corresponding class:\n\n| Class \t| a \t| b \t| c \t| d \t| e \t| f \t| g \t|\n|-------\t|---\t|---\t|---\t|---\t|---\t|---\t|---\t|\n| 0 \t| 1 \t| 1 \t| 1 \t| 1 \t| 1 \t| 1 \t| 0 \t|\n| 1 \t| 0 \t| 1 \t| 1 \t| 0 \t| 0 \t| 0 \t| 0 \t|\n| 2 \t| 1 \t| 1 \t| 0 \t| 1 \t| 1 \t| 0 \t| 1 \t|\n| 3 \t| 1 \t| 1 \t| 1 \t| 1 \t| 0 \t| 0 \t| 1 \t|\n| 4 \t| 0 \t| 1 \t| 1 \t| 0 \t| 0 \t| 1 \t| 1 \t|\n| 5 \t| 1 \t| 0 \t| 1 \t| 1 \t| 0 \t| 1 \t| 1 \t|\n| 6 \t| 1 \t| 0 \t| 1 \t| 1 \t| 1 \t| 1 \t| 1 \t|\n| 7 \t| 1 \t| 1 \t| 1 \t| 0 \t| 0 \t| 1 \t| 0 \t|\n| 8 \t| 1 \t| 1 \t| 1 \t| 1 \t| 1 \t| 1 \t| 1 \t|\n| 9 \t| 1 \t| 1 \t| 1 \t| 1 \t| 0 \t| 1 \t| 1 \t|\n\n\nHow would we represent such a lookup table in our Python code, and how would we use it? The obvious answer to the first question is a dictionary. Notice that `DIGITSDICT` is just a representation of the \"binary state\" of each segment. The digit \"8\", for example, corresponds to all seven segments being activated, or \"on\" (state of `1`). 
\n\n```py\nDIGITSDICT = {\n    (1,1,1,1,1,1,0):0,\n    (0,1,1,0,0,0,0):1,\n    (1,1,0,1,1,0,1):2,\n    (1,1,1,1,0,0,1):3,\n    (0,1,1,0,0,1,1):4,\n    (1,0,1,1,0,1,1):5,\n    (1,0,1,1,1,1,1):6,\n    (1,1,1,0,0,1,0):7,\n    (1,1,1,1,1,1,1):8,\n    (1,1,1,1,0,1,1):9\n}\n```\n\nThen, for each digit, we would look at the pixel values in each of the seven segments, and if the majority of pixels are white, we would classify that segment as being in an activated state (`1`), otherwise in a state of `0`. As we iterate over the 7 segments, we build an array of length 7, each element a binary value (`0` or `1`). \n\nWe would then find the corresponding value in our dictionary using that array (converted to a tuple). Your code would resemble the following:\n\n```py\n# define the rectangle areas corresponding to each segment\nsevensegs = [\n    ((x0, y0), (x1, y1)),\n    ((x2, y2), (x3, y3)),\n    ... # 7 of them\n]\n\n# initialize the state to OFF\non = [0] * 7 \n\n# set each segment to ON / OFF based on majority\nfor (i, ((p1x, p1y), (p2x, p2y))) in enumerate(sevensegs):\n    # numpy slicing to extract only one region\n    region = roi[p1y:p2y, p1x:p2x]\n    # if majority pixels are white, set state to ON\n    if np.sum(region == 255) > region.size * 0.5:\n        on[i] = 1\n\n# lookup on dictionary\ndigit = DIGITSDICT[tuple(on)] # digit is one of 0-9\n```\n\nThere are multiple ways to write a for-loop, but it's important that you are aware of the order in which your for-loop executes. Referring to our seven-segment illustration below, the first iteration is only concerned with the state of 'A' while the second iteration handles the state of 'B', and so on. \n\n![](assets/sevenseg1.png)\n\nUsing `enumerate`, we obtain an additional counter (`i`) to our iterable (`sevensegs`); this is convenient for the purpose of setting states. At the first iteration, the first element in our list is conditionally set to 1 if more than half of the pixels in segment 'A' are white. 
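To see the majority-vote lookup end to end without needing an image, here is a minimal, self-contained sketch. The seven pixel \"regions\" are hypothetical flat lists standing in for the numpy slices, hard-coded so that segments a, b, c and f are mostly white (the digit \"7\"):

```py
# lookup table mapping segment states (a..g) to the digit class
DIGITSDICT = {
    (1, 1, 1, 1, 1, 1, 0): 0,
    (0, 1, 1, 0, 0, 0, 0): 1,
    (1, 1, 0, 1, 1, 0, 1): 2,
    (1, 1, 1, 1, 0, 0, 1): 3,
    (0, 1, 1, 0, 0, 1, 1): 4,
    (1, 0, 1, 1, 0, 1, 1): 5,
    (1, 0, 1, 1, 1, 1, 1): 6,
    (1, 1, 1, 0, 0, 1, 0): 7,
    (1, 1, 1, 1, 1, 1, 1): 8,
    (1, 1, 1, 1, 0, 1, 1): 9,
}

ON, OFF = [255, 255, 255, 0], [0, 0, 255, 0]  # mostly white / mostly black
regions = [ON, ON, ON, OFF, OFF, ON, OFF]     # a, b, c, f lit -> "7"

on = [0] * 7  # initialize every segment state to OFF
for i, region in enumerate(regions):
    # majority vote: a segment is ON when >50% of its pixels are white
    if sum(p == 255 for p in region) > len(region) * 0.5:
        on[i] = 1

digit = DIGITSDICT[tuple(on)]
print(digit)  # 7
```

The real script works the same way, only with `np.sum(region == 255)` over 2-D slices of the binarized image.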
A more detailed example of python's enumeration is in `utils/enumerate.py`.\n\n### Practical Strategies\nIf you pay close attention to the digit '0' in our LCD display, you will notice that the absence of the 'G' segment causes a pretty visible and significant gap. When you test your digit recognition script without special consideration for this attribute, you will find it consistently failing to account for the numbers \"0\", \"1\" and \"7\". In fact, you may not even be able to isolate the aforementioned numbers altogether using the `findContours` operation, because they are treated as two disjoint pieces instead of a single whole. \n\nA reasonable strategy to handle this is the Dilation or Closing (Dilation followed by Erosion) operation that you've learned earlier. \n\nSimilarly, your ROI may necessitate other pre-processing, and the specific tactical solutions vary greatly depending on the problem set at hand. \n\nAs I inspected the bounding boxes we retrieved around the LCD screens, the observation that these bounding boxes often have their digits centered around the bottom half of the display led me to insert an additional step prior to the morphological transformation in the final code solution. The step uses numpy subsetting to trim away the top 20%, as well as 20% on each side, of the image:\n\n```py\nroi = cv2.imread(\"roi.png\", flags=0)\n# 20% of the image height, also used as the side margin\nRATIO = roi.shape[0] * 0.2\ntrimmed = roi[\n    int(RATIO) :, \n    int(RATIO) : roi.shape[1] - int(RATIO)]\n```\n\nThat said, whenever possible, you want to be cautious not to hand-tune your solution in a way that is overly specific to the images you have at hand, lest the solution **only** work on those specific images and not others, a phenomenon fondly termed \"overfitting\" in the machine learning community.\n\nI've re-executed the solution code against some sample image sets, once with the \"trimming\" in place and once without, before settling on the decision. 
As you will see later, the trimming improves our accuracy and is a relatively safe strategy, given how every LCD screen, regardless of the issuer (bank), has the same asymmetry, with more \"blank space\" in the top half compared to the bottom half. \n\n#### Contour Properties\nFurthermore, in many cases of digit recognition / digit classification, you will want to predict the class of each digit in an ordered fashion. Suppose the LCD screen contains the digits \"40710382\": our algorithm should correctly isolate these digits and classify them iteratively, but do so from the leftmost digit to the rightmost. Failing to account for this may result in your algorithm correctly classifying each digit, but producing an unreasonable output such as \"17040382\". \n\nThere are a few strategies you can employ here. We've seen in `contourarea_01.py` and `contourarea_02.py` how contours have attributes that can be retrieved using the `contourArea()` and `arcLength()` functions. Inspect the following snippet and it should help jog your memory:\n\n```py\ncnts = sorted(cnts, key=cv2.contourArea, reverse=True)[:10]\n\nfor i, cnt in enumerate(cnts):\n    cv2.drawContours(img_color, cnts, i, BCOLOR, THICKNESS)\n    area = cv2.contourArea(cnt)\n    peri = cv2.arcLength(cnt, closed=True)\n    print(f\"Area:{area}; Perimeter: {peri}\")\n```\n\nIndeed, we're using contour area as a good indicator to search for our region of interest. When we take this idea a little further, we can place a constraint on our search criteria. In the following code, we draw a bounding rectangle and, for an extra layer of precaution, only keep bounding boxes that are taller than 20 pixels (step 1).\n\nCalling `boundingRect()` on a contour returns 4 values: the x and y coordinates, along with the width and height, of the contour. \n\nWe then use another property of the contour, its top-left coordinate, to determine the logical order of our digits. 
Specifically, we use the first returned value (`cv2.boundingRect(cnt)[0]`) since that's the x value of the top-left coordinate of each region. By sorting against this value, our digits are stored in the Python list in an ordered fashion, determined by their respective x coordinates. \n\n```py\ndigits_cnts = []\ncnts, _ = cv2.findContours(eroded, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)\nfor cnt in cnts:\n    (x, y, w, h) = cv2.boundingRect(cnt)\n    # step 1\n    if h > 20:\n        digits_cnts += [cnt]\n# step 2\nsorted_digits = sorted(digits_cnts, key=lambda cnt: cv2.boundingRect(cnt)[0])\n```\n\nWhen we put these together, we now have a complete pipeline:  \n![](assets/digitrecflow.png)\n\nThe full solution code is in `digit_01.py` but the essential parts are as follows:\n\n```py\nimport cv2\nimport numpy as np\n# step 1:\nDIGITSDICT = {\n    (1, 1, 1, 1, 1, 1, 0): 0,\n    (0, 1, 1, 0, 0, 0, 0): 1,\n    (1, 1, 0, 1, 1, 0, 1): 2,\n    (1, 1, 1, 1, 0, 0, 1): 3,\n    (0, 1, 1, 0, 0, 1, 1): 4,\n    (1, 0, 1, 1, 0, 1, 1): 5,\n    (1, 0, 1, 1, 1, 1, 1): 6,\n    (1, 1, 1, 0, 0, 1, 0): 7,\n    (1, 1, 1, 1, 1, 1, 1): 8,\n    (1, 1, 1, 1, 0, 1, 1): 9,\n}\n\n# step 2\nroi = cv2.imread(\"inter/ocbc-roi.png\", flags=0)\n\n# step 3\nRATIO = roi.shape[0] * 0.2\nroi = cv2.bilateralFilter(roi, 5, 30, 60)\ntrimmed = roi[int(RATIO) :, int(RATIO) : roi.shape[1] - int(RATIO)]\n\n# step 4\nedged = cv2.adaptiveThreshold(\n    trimmed, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV, 5, 5\n)\n\n# step 5\nkernel = cv2.getStructuringElement(cv2.MORPH_RECT, (2, 5))\ndilated = cv2.dilate(edged, kernel, iterations=1)\neroded = cv2.erode(dilated, kernel, iterations=1)\n\n# step 6\ncnts, _ = cv2.findContours(eroded, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)\ndigits_cnts = []\nfor cnt in cnts:\n    (x, y, w, h) = cv2.boundingRect(cnt)\n    if h > 20:\n        digits_cnts += [cnt]\n\n# step 7\nsorted_digits = sorted(digits_cnts, key=lambda cnt: cv2.boundingRect(cnt)[0])\n\n# 
step 8\ndigits = []\nfor cnt in sorted_digits:\n    # step 8a\n    (x, y, w, h) = cv2.boundingRect(cnt)\n    roi = eroded[y : y + h, x : x + w]\n    qW, qH = int(w * 0.25), int(h * 0.15)\n    fractionH, halfH, fractionW = int(h * 0.05), int(h * 0.5), int(w * 0.25)\n\n    # step 8b\n    sevensegs = [\n        ((0, 0), (w, qH)),  # a (top bar)\n        ((w - qW, 0), (w, halfH)),  # b (upper right)\n        ((w - qW, halfH), (w, h)),  # c (lower right)\n        ((0, h - qH), (w, h)),  # d (lower bar)\n        ((0, halfH), (qW, h)),  # e (lower left)\n        ((0, 0), (qW, halfH)),  # f (upper left)\n        # ((0, halfH - fractionH), (w, halfH + fractionH)) # center\n        (\n            (0 + fractionW, halfH - fractionH),\n            (w - fractionW, halfH + fractionH),\n        ),  # center\n    ]\n\n    # step 8c\n    on = [0] * 7\n    for (i, ((p1x, p1y), (p2x, p2y))) in enumerate(sevensegs):\n        region = roi[p1y:p2y, p1x:p2x]\n        print(\n            f\"{i}: Sum of 1: {np.sum(region == 255)}, Sum of 0: {np.sum(region == 0)}, Shape: {region.shape}, Size: {region.size}\"\n        )\n        if np.sum(region == 255) > region.size * 0.5:\n            on[i] = 1\n        print(f\"State of ON: {on}\")\n    # step 8d\n    digit = DIGITSDICT[tuple(on)]\n    print(f\"Digit is: {digit}\")\n    digits += [digit]\n    # step 9\n    cv2.rectangle(canvas, (x, y), (x + w, y + h), CYAN, 1)\n    cv2.putText(canvas, str(digit), (x - 5, y + 6), FONT, 0.3, (0, 0, 0), 1)\n    cv2.imshow(\"Digit\", canvas)\n    cv2.waitKey(0)\nprint(f\"Digits on the token are: {digits}\")\n```\n\n- Step 1: Initialize the lookup dictionary\n- Step 2: Read our ROI image using OpenCV\n- Step 3: Noise reduction and trim away asymmetrical white space in our ROI\n- Step 4: Binarize our image using adaptive thresholding\n- Step 5: Morphological transformation to remove noise and fill the small holes in our digit\n- Step 6: Find contours in our image with a height greater than 20px\n- Step 7: Sort 
the contours, using the x value of their coordinates (hence, left to right)\n- Step 8\n    - Step 8a: Create a rectangular bounding box for each digit, and some convenience units that we later use to slice the seven segments. Notice that these convenience units are not hard-coded values, but are proportional to the height (`h`) and width (`w`) of our rectangular box\n    - Step 8b: Slice the seven segments; the first segment (\"A\") runs from point (0, 0) to (w, `int(h * 0.15)`), i.e. it is `w` in width and 15% the height of the full digit contour\n    - Step 8c: Initialize the state to `0` for each of the 7 segments, then conditionally set regions with more white than black pixels to `1`\n    - Step 8d: Once all 7 states have been set, perform a lookup against the digit dictionary created in step 1; append the value to the `digits` list created at the beginning of step 8\n- Step 9: Draw a rectangle and add the predicted text for each bounding box. Finally, use a print statement to print the `digits` list. \n\n\n# References\n[^1]: LeCun, Y., Bottou, L., Bengio, Y., and Haffner, P. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86, 2278–2324\n[^2]: Saliency map, Wikipedia\n[^3]: Morphological Transformations, OpenCV Documentation\n[^4]: Seven-segment display, Wikipedia\n[^5]: Seven-segment display character representations, Wikipedia\n\n\n\n"
  },
  {
    "path": "digitrecognition/morphological_01.py",
    "content": "import cv2\nimport matplotlib.pyplot as plt\n\nroi = cv2.imread(\"inter/ocbc-roi.png\", flags=0)\nblurred = cv2.bilateralFilter(roi, 5, 30, 60)\nedged = cv2.adaptiveThreshold(\n    blurred, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV, 5, 5\n)\nkernel = cv2.getStructuringElement(cv2.MORPH_RECT, (2, 5))\ndilated = cv2.dilate(edged, kernel, iterations=1)\n\nplt.subplot(2, 2, 1), plt.imshow(roi, cmap=\"gray\")\nplt.title(\"Original\"), plt.xticks([]), plt.yticks([])\nplt.subplot(2, 2, 2), plt.imshow(blurred, cmap=\"gray\")\nplt.title(\"Blurred\"), plt.xticks([]), plt.yticks([])\nplt.subplot(2, 2, 3), plt.imshow(edged, cmap=\"gray\")\nplt.title(\"Edged\"), plt.xticks([]), plt.yticks([])\nplt.subplot(2, 2, 4), plt.imshow(dilated, cmap=\"gray\")\nplt.title(\"Dilated\"), plt.xticks([]), plt.yticks([])\nplt.show()\n\n"
  },
  {
    "path": "digitrecognition/morphological_02.py",
    "content": "import cv2\nimport matplotlib.pyplot as plt\n\nroi = cv2.imread(\"assets/0417s.png\", flags=0)\ncv2.imshow(\"Original\", roi)\ncv2.waitKey(0)\n\n_, thresh = cv2.threshold(roi, 170, 255, cv2.THRESH_BINARY)\n# thresh = cv2.adaptiveThreshold(dilated, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV, 5, 5)\ncv2.imshow(\"Threshold\", thresh)\ncv2.waitKey(0)\n\ninv = cv2.bitwise_not(thresh)\ncv2.imshow(\"Inverted\", inv)\ncv2.waitKey(0)\n\n\nkernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (4, 4))\neroded = cv2.erode(inv, kernel, iterations=1)\n\ncv2.imshow(\"Eroded\", eroded)\ncv2.waitKey(0)\n\nkernel = cv2.getStructuringElement(cv2.MORPH_RECT, (6, 6))\ndilated = cv2.dilate(eroded, kernel, iterations=1)\n\ncv2.imshow(\"Dilated\", dilated)\ncv2.waitKey(0)\n\n\nplt.subplot(2, 2, 1), plt.imshow(roi, cmap=\"gray\")\nplt.title(\"Original\"), plt.xticks([]), plt.yticks([])\n\nplt.subplot(2, 2, 2), plt.imshow(thresh, cmap=\"gray\")\nplt.title(\"Thresholded\"), plt.xticks([]), plt.yticks([])\n\nplt.subplot(2, 2, 3), plt.imshow(inv, cmap=\"gray\")\nplt.title(\"Inverted\"), plt.xticks([]), plt.yticks([])\n\nplt.subplot(2, 2, 4), plt.imshow(dilated, cmap=\"gray\")\nplt.title(\"Transformed\"), plt.xticks([]), plt.yticks([])\n\nplt.show()\n\n"
  },
  {
    "path": "digitrecognition/roi_01.py",
    "content": "import cv2\nBCOLOR = (75, 0, 130)\nTHICKNESS = 4\n\nimg_color = cv2.imread(\"assets/ocbc.jpg\")\nimg_color = cv2.resize(img_color, None, None, fx=0.5, fy=0.5)\nimg = cv2.cvtColor(img_color, cv2.COLOR_BGR2GRAY)\n\nx,y,w,h = cv2.selectROI(\"Region of interest\", img)\nprint(x,y,w,h)\n\ncropped = img[y:y+h, x:x+w]\ncv2.imshow(\"Cropped\", cropped)\ncv2.waitKey(0)\n\ncv2.rectangle(img_color, (x,y), (x+w,y+h), (255,0,0), 2)\ncv2.imshow(\"Original Image\", img_color)\ncv2.waitKey(0)\n"
  },
  {
    "path": "digitrecognition/roi_02.py",
    "content": "import cv2\nimport argparse\nimport re\n\nparser = argparse.ArgumentParser()\nparser.add_argument(\"-p\", \"--path\", default=\"assets/ocbc.jpg\")\nargs = vars(parser.parse_args())\n\n# test: dbs.jpg | ocbc.jpg\nimg_color = cv2.imread(args[\"path\"])\nimg_color = cv2.resize(img_color, None, None, fx=0.5, fy=0.5)\nimg = cv2.cvtColor(img_color, cv2.COLOR_BGR2GRAY)\n\nblurred = cv2.GaussianBlur(img, (7, 7), 0)\nblurred = cv2.bilateralFilter(blurred, 5, sigmaColor=50, sigmaSpace=50)\nedged = cv2.Canny(blurred, 130, 150, 255)\n\ncv2.imshow(\"Outline of device\", edged)\ncv2.waitKey(0)\n\ncnts, _ = cv2.findContours(edged, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)\n# sort contours by area, and get the largest\ncnts = sorted(cnts, key=cv2.contourArea, reverse=True)[:1]\n\ncv2.drawContours(img_color, cnts, 0, (75, 0, 130), 4)\ncv2.imshow(\"Target Contour\", img_color)\ncv2.waitKey(0)\n\nx, y, w, h = cv2.boundingRect(cnts[0])\nroi = img[y : y + h, x : x + w]\ncv2.imshow(\"ROI\", roi)\n\nimg_name = re.search(\"(?<=\\/)(.*)(?=\\.jpg)\", args[\"path\"]).group(1)\n\ncv2.imwrite(f\"inter/{img_name}-roi.png\", roi)\ncv2.waitKey(0)\n"
  },
  {
    "path": "digitrecognition/utils/enumerate.py",
    "content": "digits = ['a', 'b', 'c', 'd']\n\ncontracts = {\n    # salesperson: contract value, duration\n    'adam':(500, 2),\n    'brian':(300, 1.5),\n    'canny':(1000, 4)\n}\n\n# for i in range(len(digits)):\n#     print(i, digits[i])\n# better written as:\nfor i, d in enumerate(digits):\n    print(i, d)\n\nprint('---')\nprint(dict(enumerate(digits)))\n\nfor i, c in enumerate(contracts):\n    print(i, c)\n\nfor i, v in enumerate(contracts.values()):\n    print(i, v)\n\nd = {i+1:(k,f'${v1} for {v2} years') for i, (k,(v1, v2)) in enumerate(contracts.items())}\nprint(d)"
  },
  {
    "path": "edgedetect/adaptivethresholding_01.py",
    "content": "import cv2\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nimg = cv2.imread(\"assets/sudoku.jpg\", flags=0)\n_, img_threshold = cv2.threshold(img, 50, 255, cv2.THRESH_BINARY)\n\nimg = cv2.medianBlur(img, 5)\n\nmean_adaptive = cv2.adaptiveThreshold(\n    img, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 11, 2\n)\ngaussian_adaptive = cv2.adaptiveThreshold(\n    img, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 11, 2\n)\n\nplt.subplot(2, 2, 1), plt.imshow(img, cmap=\"gray\")\nplt.title(\"Original\"), plt.xticks([]), plt.yticks([])\nplt.subplot(2, 2, 2), plt.imshow(img_threshold, cmap=\"gray\")\nplt.title(\"Binary Threshold (global:50)\"), plt.xticks([]), plt.yticks([])\nplt.subplot(2, 2, 3), plt.imshow(mean_adaptive, cmap=\"gray\")\nplt.title(\"Mean Adaptive\"), plt.xticks([]), plt.yticks([])\nplt.subplot(2, 2, 4), plt.imshow(gaussian_adaptive, cmap=\"gray\")\nplt.title(\"Gaussian Adaptive\"), plt.xticks([]), plt.yticks([])\nplt.show()\n\n"
  },
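To make the `blockSize=11, C=2` arguments in `adaptivethresholding_01.py` concrete, here is a minimal NumPy sketch of what mean adaptive thresholding computes. This is a toy re-implementation, not OpenCV's code, and `mean_adaptive_threshold` is a hypothetical helper name; the assumption is the documented mean-minus-C rule, where each pixel is compared against the mean of its local block minus the constant `C`.

```python
import numpy as np

def mean_adaptive_threshold(img, block_size=11, C=2, max_value=255):
    """Toy sketch of mean adaptive thresholding: each pixel is kept
    (set to max_value) if it exceeds the mean of its block_size x
    block_size neighborhood minus the constant C."""
    h, w = img.shape
    pad = block_size // 2
    # replicate-pad the border so every pixel has a full neighborhood
    padded = np.pad(img.astype(np.float64), pad, mode="edge")
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            block = padded[y : y + block_size, x : x + block_size]
            T = block.mean() - C  # per-pixel threshold
            out[y, x] = max_value if img[y, x] > T else 0
    return out
```

The double loop is deliberately slow and explicit; `cv2.adaptiveThreshold` does the same comparison with a vectorized box filter. Note that in a flat region the local mean equals the pixel value, so with a positive `C` the comparison `pixel > mean - C` is true and flat regions come out white.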
  {
    "path": "edgedetect/canny_01.py",
    "content": "import numpy as np\nimport cv2\nimport matplotlib.pyplot as plt\n\nimg = cv2.imread(\"assets/castello.png\", flags=0)\nimg = cv2.medianBlur(img, 9)\nimg = cv2.GaussianBlur(img, (9, 9), 0)\n\ndef sobel(img, k):\n    gradient_x = cv2.Sobel(img, cv2.CV_64F, 1, 0)\n    gradient_y = cv2.Sobel(img, cv2.CV_64F, 0, 1)   \n    gradient_x = cv2.convertScaleAbs(gradient_x)\n    gradient_y = cv2.convertScaleAbs(gradient_y)\n\n    return cv2.addWeighted(gradient_x, 0.5, gradient_y, 0.5, 0)\n\nsobel = sobel(img, 3)\ncanny = cv2.Canny(img, 50, 180)\n\n\nplt.subplot(1, 2, 1)\nplt.imshow(sobel, cmap=\"gray\")\nplt.title(\"Sobel Edge Detector\"), plt.xticks([]), plt.yticks([])\n\nplt.subplot(1, 2, 2)\nplt.imshow(canny, cmap=\"gray\")\nplt.title(\"Canny Edge Detector\"), plt.xticks([]), plt.yticks([])\nplt.show()"
  },
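The `cv2.addWeighted` call in `canny_01.py` is a cheap stand-in for combining the two Sobel responses. The lecture notes define the true combination as the gradient magnitude $|G| = \sqrt{G_x^2 + G_y^2}$ with orientation $\theta = \tan^{-1}(G_y / G_x)$. A minimal NumPy sketch of that formula, with `gradient_magnitude_orientation` as a hypothetical helper name:

```python
import numpy as np

def gradient_magnitude_orientation(gx, gy):
    """Combine x/y gradient responses into magnitude and orientation.
    magnitude = sqrt(gx^2 + gy^2); orientation in degrees via arctan2,
    which handles gx == 0 without dividing by zero."""
    gx = gx.astype(np.float64)
    gy = gy.astype(np.float64)
    magnitude = np.hypot(gx, gy)            # element-wise sqrt(gx^2 + gy^2)
    orientation = np.degrees(np.arctan2(gy, gx))
    return magnitude, orientation
```

Using `np.arctan2` instead of a literal `gy / gx` is the standard trick: it returns the correct quadrant and is defined even where the horizontal response is zero (a perfectly horizontal edge).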
  {
    "path": "edgedetect/contour_01.py",
    "content": "import cv2\nimport numpy as np\n\n\nimage = cv2.imread(\"assets/pens.png\")\nimage = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)\ncv2.imshow(\"Grayscale\", image)\ncv2.waitKey(0)\n\nimage = cv2.GaussianBlur(image, (3, 3), 0)\ncv2.imshow(\"After Smoothing\", image)\ncv2.waitKey(0)\n\n\ndef sobel(image):\n    # run with col.png for best effect\n    # cv2.Sobel last 2 argument -> order of derivatives in x and y direction respectively\n    sobelX = cv2.Sobel(image, cv2.CV_64F, 1, 0)  # find vertical edges\n    sobelY = cv2.Sobel(image, cv2.CV_64F, 0, 1)  # find horizontal edges along y-axis\n\n    gradient_x = np.uint8(np.absolute(sobelX))\n    gradient_y = np.uint8(np.absolute(sobelY))\n\n    sobelCombined = cv2.bitwise_or(gradient_x, gradient_y)\n    cv2.imshow(\"Sobel Combined\", sobelCombined)\n    cv2.waitKey(0)\n    return sobelCombined\n\n\ndef counting_penguins(sobel, image):\n    sobeled = sobel(image)\n    _, edged = cv2.threshold(sobeled, 20, 255, cv2.THRESH_BINARY)\n    cv2.imshow(\"(Edged)\", edged)\n    cv2.waitKey(0)\n    cnts, _ = cv2.findContours(\n        # does this need to be changed?\n        edged,\n        cv2.RETR_EXTERNAL,\n        cv2.CHAIN_APPROX_SIMPLE,\n    )\n\n    canvas = np.ones(image.shape)\n    cv2.drawContours(canvas, cnts, -1, (0, 255, 255), 1)\n    cv2.imshow(\"Contour\", canvas)\n    cv2.waitKey(0)\n\n    print(f\"Found {len(cnts)} penguins\")\n\n\nif __name__ == \"__main__\":\n    counting_penguins(sobel, image)\n"
  },
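The Sobel x-kernel used throughout these scripts is a smoothed version of the 1D central difference worked through in the lecture notes (the pixel row with the underlined 180). To see the raw 1D operation on its own, here is a short sketch using `np.gradient`, which applies the central difference $(f(x+1) - f(x-1))/2$ at interior points:

```python
import numpy as np

# The pixel row from the lecture's worked example: a sharp
# dark-to-bright transition around the center value 180.
row = np.array([0, 255, 65, 180, 255, 255, 255], dtype=np.float64)

# np.gradient uses the central difference (f(x+1) - f(x-1)) / 2 at
# interior points, matching the [-1, 0, 1] kernel scaled by 1/2.
deriv = np.gradient(row)

print(deriv[3])   # (255 - 65) / 2 = 95.0 at the center pixel
print(deriv[5])   # 0.0 in the flat white region: no edge response
```

This reproduces the two claims from the notes: the filter responds strongly (95) where intensity changes sharply, and returns 0 over the flat `[..., 255, 255, 255]` region.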
  {
    "path": "edgedetect/contourapprox.py",
    "content": "import cv2\nimport numpy as np\n\n\nimage = cv2.imread(\"homework/equal.png\", flags=0)\ncv2.imshow(\"Original\", image)\ncv2.waitKey(0)\n\n\ndef edge(image):\n    _, edged = cv2.threshold(image, 220, 255, cv2.THRESH_BINARY_INV)\n    cv2.imshow(\"(Edged)\", edged)\n    cv2.waitKey(0)\n    cnts, _ = cv2.findContours(\n        # does this need to be changed?\n        edged,\n        cv2.RETR_EXTERNAL,\n        cv2.CHAIN_APPROX_SIMPLE,\n    )\n    print(f\"Cnts Simple Shape (1): {cnts[0].shape}\")\n    print(f\"Cnts Simple Shape (2): {cnts[0].shape}\")\n    cnts2, _ = cv2.findContours(\n        # does this need to be changed?\n        edged,\n        cv2.RETR_EXTERNAL,\n        cv2.CHAIN_APPROX_NONE,\n    )\n    print(f\"Cnts NoApprox Shape:{cnts2[0].shape}\")\n    print(cnts)\n    canvas = np.ones(image.shape)\n    cv2.drawContours(canvas, cnts, -1, (0, 255, 255), 1)\n    cv2.imshow(\"Contour\", canvas)\n    cv2.waitKey(0)\n    print(f\"Found {len(cnts)} shapes!\")\n\n\nif __name__ == \"__main__\":\n    edge(image)\n"
  },
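The shape comparison printed by `contourapprox.py` shows that `CHAIN_APPROX_SIMPLE` stores far fewer points than `CHAIN_APPROX_NONE`. The intuition is that it drops points lying on straight segments and keeps only the endpoints. Here is a toy NumPy sketch of that idea, not OpenCV's actual algorithm; `compress_collinear` is a hypothetical helper name:

```python
import numpy as np

def compress_collinear(points):
    """Toy sketch of the idea behind cv2.CHAIN_APPROX_SIMPLE: drop
    every contour point that lies on the straight segment between its
    neighbors, keeping only 'corners' where the direction changes."""
    points = np.asarray(points)
    keep = []
    n = len(points)
    for i in range(n):
        prev_pt = points[i - 1]           # closed contour: wraps around
        next_pt = points[(i + 1) % n]
        v1 = points[i] - prev_pt
        v2 = next_pt - points[i]
        # 2D cross product is 0 exactly when the three points are collinear
        if v1[0] * v2[1] - v1[1] * v2[0] != 0:
            keep.append(points[i])
    return np.array(keep)
```

Applied to the 12 boundary points of an axis-aligned square, this keeps only the 4 corner points, mirroring how a rectangle contour under `CHAIN_APPROX_SIMPLE` collapses to 4 points while `CHAIN_APPROX_NONE` keeps every boundary pixel.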
  {
    "path": "edgedetect/edgedetect.html",
    "content": "<!DOCTYPE html><html><head>\n      <title>edgedetect</title>\n      <meta charset=\"utf-8\">\n      <meta name=\"viewport\" content=\"width=device-width, initial-scale=1.0\">\n      \n      \n        <script type=\"text/x-mathjax-config\">\n          MathJax.Hub.Config({\"extensions\":[\"tex2jax.js\"],\"jax\":[\"input/TeX\",\"output/HTML-CSS\"],\"messageStyle\":\"none\",\"tex2jax\":{\"processEnvironments\":false,\"processEscapes\":true,\"inlineMath\":[[\"$\",\"$\"],[\"\\\\(\",\"\\\\)\"]],\"displayMath\":[[\"$$\",\"$$\"],[\"\\\\[\",\"\\\\]\"]]},\"TeX\":{\"extensions\":[\"AMSmath.js\",\"AMSsymbols.js\",\"noErrors.js\",\"noUndefined.js\"]},\"HTML-CSS\":{\"availableFonts\":[\"TeX\"]}});\n        </script>\n        <script type=\"text/javascript\" async src=\"file:////Users/samuel/.vscode/extensions/shd101wyy.markdown-preview-enhanced-0.5.0/node_modules/@shd101wyy/mume/dependencies/mathjax/MathJax.js\" charset=\"UTF-8\"></script>\n        \n      \n      \n\n      \n      \n      \n      \n      \n      \n      \n\n      <style>\n      /**\n * prism.js Github theme based on GitHub's theme.\n * @author Sam Clarke\n */\ncode[class*=\"language-\"],\npre[class*=\"language-\"] {\n  color: #333;\n  background: none;\n  font-family: Consolas, \"Liberation Mono\", Menlo, Courier, monospace;\n  text-align: left;\n  white-space: pre;\n  word-spacing: normal;\n  word-break: normal;\n  word-wrap: normal;\n  line-height: 1.4;\n\n  -moz-tab-size: 8;\n  -o-tab-size: 8;\n  tab-size: 8;\n\n  -webkit-hyphens: none;\n  -moz-hyphens: none;\n  -ms-hyphens: none;\n  hyphens: none;\n}\n\n/* Code blocks */\npre[class*=\"language-\"] {\n  padding: .8em;\n  overflow: auto;\n  /* border: 1px solid #ddd; */\n  border-radius: 3px;\n  /* background: #fff; */\n  background: #f5f5f5;\n}\n\n/* Inline code */\n:not(pre) > code[class*=\"language-\"] {\n  padding: .1em;\n  border-radius: .3em;\n  white-space: normal;\n  background: #f5f5f5;\n}\n\n.token.comment,\n.token.blockquote {\n  
color: #969896;\n}\n\n.token.cdata {\n  color: #183691;\n}\n\n.token.doctype,\n.token.punctuation,\n.token.variable,\n.token.macro.property {\n  color: #333;\n}\n\n.token.operator,\n.token.important,\n.token.keyword,\n.token.rule,\n.token.builtin {\n  color: #a71d5d;\n}\n\n.token.string,\n.token.url,\n.token.regex,\n.token.attr-value {\n  color: #183691;\n}\n\n.token.property,\n.token.number,\n.token.boolean,\n.token.entity,\n.token.atrule,\n.token.constant,\n.token.symbol,\n.token.command,\n.token.code {\n  color: #0086b3;\n}\n\n.token.tag,\n.token.selector,\n.token.prolog {\n  color: #63a35c;\n}\n\n.token.function,\n.token.namespace,\n.token.pseudo-element,\n.token.class,\n.token.class-name,\n.token.pseudo-class,\n.token.id,\n.token.url-reference .token.variable,\n.token.attr-name {\n  color: #795da3;\n}\n\n.token.entity {\n  cursor: help;\n}\n\n.token.title,\n.token.title .token.punctuation {\n  font-weight: bold;\n  color: #1d3e81;\n}\n\n.token.list {\n  color: #ed6a43;\n}\n\n.token.inserted {\n  background-color: #eaffea;\n  color: #55a532;\n}\n\n.token.deleted {\n  background-color: #ffecec;\n  color: #bd2c00;\n}\n\n.token.bold {\n  font-weight: bold;\n}\n\n.token.italic {\n  font-style: italic;\n}\n\n\n/* JSON */\n.language-json .token.property {\n  color: #183691;\n}\n\n.language-markup .token.tag .token.punctuation {\n  color: #333;\n}\n\n/* CSS */\ncode.language-css,\n.language-css .token.function {\n  color: #0086b3;\n}\n\n/* YAML */\n.language-yaml .token.atrule {\n  color: #63a35c;\n}\n\ncode.language-yaml {\n  color: #183691;\n}\n\n/* Ruby */\n.language-ruby .token.function {\n  color: #333;\n}\n\n/* Markdown */\n.language-markdown .token.url {\n  color: #795da3;\n}\n\n/* Makefile */\n.language-makefile .token.symbol {\n  color: #795da3;\n}\n\n.language-makefile .token.variable {\n  color: #183691;\n}\n\n.language-makefile .token.builtin {\n  color: #0086b3;\n}\n\n/* Bash */\n.language-bash .token.keyword {\n  color: #0086b3;\n}\n\n/* highlight 
*/\npre[data-line] {\n  position: relative;\n  padding: 1em 0 1em 3em;\n}\npre[data-line] .line-highlight-wrapper {\n  position: absolute;\n  top: 0;\n  left: 0;\n  background-color: transparent;\n  display: block;\n  width: 100%;\n}\n\npre[data-line] .line-highlight {\n  position: absolute;\n  left: 0;\n  right: 0;\n  padding: inherit 0;\n  margin-top: 1em;\n  background: hsla(24, 20%, 50%,.08);\n  background: linear-gradient(to right, hsla(24, 20%, 50%,.1) 70%, hsla(24, 20%, 50%,0));\n  pointer-events: none;\n  line-height: inherit;\n  white-space: pre;\n}\n\npre[data-line] .line-highlight:before, \npre[data-line] .line-highlight[data-end]:after {\n  content: attr(data-start);\n  position: absolute;\n  top: .4em;\n  left: .6em;\n  min-width: 1em;\n  padding: 0 .5em;\n  background-color: hsla(24, 20%, 50%,.4);\n  color: hsl(24, 20%, 95%);\n  font: bold 65%/1.5 sans-serif;\n  text-align: center;\n  vertical-align: .3em;\n  border-radius: 999px;\n  text-shadow: none;\n  box-shadow: 0 1px white;\n}\n\npre[data-line] .line-highlight[data-end]:after {\n  content: attr(data-end);\n  top: auto;\n  bottom: .4em;\n}html body{font-family:\"Helvetica Neue\",Helvetica,\"Segoe UI\",Arial,freesans,sans-serif;font-size:16px;line-height:1.6;color:#333;background-color:#fff;overflow:initial;box-sizing:border-box;word-wrap:break-word}html body>:first-child{margin-top:0}html body h1,html body h2,html body h3,html body h4,html body h5,html body h6{line-height:1.2;margin-top:1em;margin-bottom:16px;color:#000}html body h1{font-size:2.25em;font-weight:300;padding-bottom:.3em}html body h2{font-size:1.75em;font-weight:400;padding-bottom:.3em}html body h3{font-size:1.5em;font-weight:500}html body h4{font-size:1.25em;font-weight:600}html body h5{font-size:1.1em;font-weight:600}html body h6{font-size:1em;font-weight:600}html body h1,html body h2,html body h3,html body h4,html body h5{font-weight:600}html body h5{font-size:1em}html body h6{color:#5c5c5c}html body strong{color:#000}html body 
del{color:#5c5c5c}html body a:not([href]){color:inherit;text-decoration:none}html body a{color:#08c;text-decoration:none}html body a:hover{color:#00a3f5;text-decoration:none}html body img{max-width:100%}html body>p{margin-top:0;margin-bottom:16px;word-wrap:break-word}html body>ul,html body>ol{margin-bottom:16px}html body ul,html body ol{padding-left:2em}html body ul.no-list,html body ol.no-list{padding:0;list-style-type:none}html body ul ul,html body ul ol,html body ol ol,html body ol ul{margin-top:0;margin-bottom:0}html body li{margin-bottom:0}html body li.task-list-item{list-style:none}html body li>p{margin-top:0;margin-bottom:0}html body .task-list-item-checkbox{margin:0 .2em .25em -1.8em;vertical-align:middle}html body .task-list-item-checkbox:hover{cursor:pointer}html body blockquote{margin:16px 0;font-size:inherit;padding:0 15px;color:#5c5c5c;border-left:4px solid #d6d6d6}html body blockquote>:first-child{margin-top:0}html body blockquote>:last-child{margin-bottom:0}html body hr{height:4px;margin:32px 0;background-color:#d6d6d6;border:0 none}html body table{margin:10px 0 15px 0;border-collapse:collapse;border-spacing:0;display:block;width:100%;overflow:auto;word-break:normal;word-break:keep-all}html body table th{font-weight:bold;color:#000}html body table td,html body table th{border:1px solid #d6d6d6;padding:6px 13px}html body dl{padding:0}html body dl dt{padding:0;margin-top:16px;font-size:1em;font-style:italic;font-weight:bold}html body dl dd{padding:0 16px;margin-bottom:16px}html body code{font-family:Menlo,Monaco,Consolas,'Courier New',monospace;font-size:.85em !important;color:#000;background-color:#f0f0f0;border-radius:3px;padding:.2em 0}html body code::before,html body code::after{letter-spacing:-0.2em;content:\"\\00a0\"}html body pre>code{padding:0;margin:0;font-size:.85em !important;word-break:normal;white-space:pre;background:transparent;border:0}html body .highlight{margin-bottom:16px}html body .highlight pre,html body 
pre{padding:1em;overflow:auto;font-size:.85em !important;line-height:1.45;border:#d6d6d6;border-radius:3px}html body .highlight pre{margin-bottom:0;word-break:normal}html body pre code,html body pre tt{display:inline;max-width:initial;padding:0;margin:0;overflow:initial;line-height:inherit;word-wrap:normal;background-color:transparent;border:0}html body pre code:before,html body pre tt:before,html body pre code:after,html body pre tt:after{content:normal}html body p,html body blockquote,html body ul,html body ol,html body dl,html body pre{margin-top:0;margin-bottom:16px}html body kbd{color:#000;border:1px solid #d6d6d6;border-bottom:2px solid #c7c7c7;padding:2px 4px;background-color:#f0f0f0;border-radius:3px}@media print{html body{background-color:#fff}html body h1,html body h2,html body h3,html body h4,html body h5,html body h6{color:#000;page-break-after:avoid}html body blockquote{color:#5c5c5c}html body pre{page-break-inside:avoid}html body table{display:table}html body img{display:block;max-width:100%;max-height:100%}html body pre,html body code{word-wrap:break-word;white-space:pre}}.markdown-preview{width:100%;height:100%;box-sizing:border-box}.markdown-preview .pagebreak,.markdown-preview .newpage{page-break-before:always}.markdown-preview pre.line-numbers{position:relative;padding-left:3.8em;counter-reset:linenumber}.markdown-preview pre.line-numbers>code{position:relative}.markdown-preview pre.line-numbers .line-numbers-rows{position:absolute;pointer-events:none;top:1em;font-size:100%;left:0;width:3em;letter-spacing:-1px;border-right:1px solid #999;-webkit-user-select:none;-moz-user-select:none;-ms-user-select:none;user-select:none}.markdown-preview pre.line-numbers .line-numbers-rows>span{pointer-events:none;display:block;counter-increment:linenumber}.markdown-preview pre.line-numbers .line-numbers-rows>span:before{content:counter(linenumber);color:#999;display:block;padding-right:.8em;text-align:right}.markdown-preview .mathjax-exps 
.MathJax_Display{text-align:center !important}.markdown-preview:not([for=\"preview\"]) .code-chunk .btn-group{display:none}.markdown-preview:not([for=\"preview\"]) .code-chunk .status{display:none}.markdown-preview:not([for=\"preview\"]) .code-chunk .output-div{margin-bottom:16px}.scrollbar-style::-webkit-scrollbar{width:8px}.scrollbar-style::-webkit-scrollbar-track{border-radius:10px;background-color:transparent}.scrollbar-style::-webkit-scrollbar-thumb{border-radius:5px;background-color:rgba(150,150,150,0.66);border:4px solid rgba(150,150,150,0.66);background-clip:content-box}html body[for=\"html-export\"]:not([data-presentation-mode]){position:relative;width:100%;height:100%;top:0;left:0;margin:0;padding:0;overflow:auto}html body[for=\"html-export\"]:not([data-presentation-mode]) .markdown-preview{position:relative;top:0}@media screen and (min-width:914px){html body[for=\"html-export\"]:not([data-presentation-mode]) .markdown-preview{padding:2em calc(50% - 457px + 2em)}}@media screen and (max-width:914px){html body[for=\"html-export\"]:not([data-presentation-mode]) .markdown-preview{padding:2em}}@media screen and (max-width:450px){html body[for=\"html-export\"]:not([data-presentation-mode]) .markdown-preview{font-size:14px !important;padding:1em}}@media print{html body[for=\"html-export\"]:not([data-presentation-mode]) #sidebar-toc-btn{display:none}}html body[for=\"html-export\"]:not([data-presentation-mode]) #sidebar-toc-btn{position:fixed;bottom:8px;left:8px;font-size:28px;cursor:pointer;color:inherit;z-index:99;width:32px;text-align:center;opacity:.4}html body[for=\"html-export\"]:not([data-presentation-mode])[html-show-sidebar-toc] #sidebar-toc-btn{opacity:1}html body[for=\"html-export\"]:not([data-presentation-mode])[html-show-sidebar-toc] .md-sidebar-toc{position:fixed;top:0;left:0;width:300px;height:100%;padding:32px 0 48px 0;font-size:14px;box-shadow:0 0 4px rgba(150,150,150,0.33);box-sizing:border-box;overflow:auto;background-color:inherit}html 
body[for=\"html-export\"]:not([data-presentation-mode])[html-show-sidebar-toc] .md-sidebar-toc::-webkit-scrollbar{width:8px}html body[for=\"html-export\"]:not([data-presentation-mode])[html-show-sidebar-toc] .md-sidebar-toc::-webkit-scrollbar-track{border-radius:10px;background-color:transparent}html body[for=\"html-export\"]:not([data-presentation-mode])[html-show-sidebar-toc] .md-sidebar-toc::-webkit-scrollbar-thumb{border-radius:5px;background-color:rgba(150,150,150,0.66);border:4px solid rgba(150,150,150,0.66);background-clip:content-box}html body[for=\"html-export\"]:not([data-presentation-mode])[html-show-sidebar-toc] .md-sidebar-toc a{text-decoration:none}html body[for=\"html-export\"]:not([data-presentation-mode])[html-show-sidebar-toc] .md-sidebar-toc ul{padding:0 1.6em;margin-top:.8em}html body[for=\"html-export\"]:not([data-presentation-mode])[html-show-sidebar-toc] .md-sidebar-toc li{margin-bottom:.8em}html body[for=\"html-export\"]:not([data-presentation-mode])[html-show-sidebar-toc] .md-sidebar-toc ul{list-style-type:none}html body[for=\"html-export\"]:not([data-presentation-mode])[html-show-sidebar-toc] .markdown-preview{left:300px;width:calc(100% -  300px);padding:2em calc(50% - 457px -  150px);margin:0;box-sizing:border-box}@media screen and (max-width:1274px){html body[for=\"html-export\"]:not([data-presentation-mode])[html-show-sidebar-toc] .markdown-preview{padding:2em}}@media screen and (max-width:450px){html body[for=\"html-export\"]:not([data-presentation-mode])[html-show-sidebar-toc] .markdown-preview{width:100%}}html body[for=\"html-export\"]:not([data-presentation-mode]):not([html-show-sidebar-toc]) .markdown-preview{left:50%;transform:translateX(-50%)}html body[for=\"html-export\"]:not([data-presentation-mode]):not([html-show-sidebar-toc]) .md-sidebar-toc{display:none}\n/* Please visit the URL below for more information: */\n/*   https://shd101wyy.github.io/markdown-preview-enhanced/#/customize-css */\n.markdown-preview.markdown-preview 
h1,\n.markdown-preview.markdown-preview h2,\n.markdown-preview.markdown-preview h3,\n.markdown-preview.markdown-preview h4,\n.markdown-preview.markdown-preview h5,\n.markdown-preview.markdown-preview h6 {\n  font-weight: bolder;\n  text-decoration-line: underline;\n}\n\n      </style>\n    </head>\n    <body for=\"html-export\">\n      <div class=\"mume markdown-preview  \">\n      <h1 class=\"mume-header\" id=\"definition\">Definition</h1>\n\n<p>An edge can be defined as a boundary between regions in an image<sup class=\"footnote-ref\"><a href=\"#fn1\" id=\"fnref1\">[1]</a></sup>. The edge detection techniques we&apos;ll learn in this course build upon what we&apos;ve learned from our lessons in kernel convolution. Edge detection is the process of using kernels to reduce the information in our data and preserve only the necessary structural properties of our image<sup class=\"footnote-ref\"><a href=\"#fn1\" id=\"fnref1:1\">[1:1]</a></sup>.</p>\n<h1 class=\"mume-header\" id=\"gradient-based-edge-detection\">Gradient-based Edge Detection</h1>\n\n<p>The gradient points in the direction of the most rapid increase in intensity. When we apply a gradient-based edge detection method, we are searching for the maxima and minima in the first derivative of the image.</p>\n<p>When we apply our convolution onto the image, we are looking for regions where there&apos;s a sharp change in intensity or color. Arguably the most common edge detection method using this approach is the Sobel Operator.</p>\n<h2 class=\"mume-header\" id=\"sobel-operator\">Sobel Operator</h2>\n\n<p>The <code>Sobel</code> operator applies a filtering operation to produce an image output where the edges are emphasized. 
It convolves our original image with two 3x3 kernels to capture approximations of the derivatives in the horizontal and vertical directions.</p>\n<p>The x-direction and y-direction kernels are:</p>\n<p></p><div class=\"mathjax-exps\">$$G_x = \\begin{bmatrix} 1 &amp; 0 &amp; -1 \\\\ 2 &amp; 0 &amp; -2 \\\\ 1 &amp; 0 &amp; -1  \\end{bmatrix}  G_y = \\begin{bmatrix} 1 &amp; 2 &amp; 1 \\\\ 0 &amp; 0 &amp; 0 \\\\ -1 &amp; -2 &amp; -1  \\end{bmatrix}$$</div><p></p>\n<p>Each kernel is applied separately to obtain the gradient component in each orientation, <span class=\"mathjax-exps\">$G_x$</span> and <span class=\"mathjax-exps\">$G_y$</span>. Expressed as a formula, the gradient magnitude is:<br>\n</p><div class=\"mathjax-exps\">$$|G| = \\sqrt{G^2_x + G^2_y}$$</div><p></p>\n<p>The orientation <span class=\"mathjax-exps\">$\\theta$</span> of the gradient is calculated as follows:<br>\n</p><div class=\"mathjax-exps\">$$\\theta(x,y)=\\tan^{-1}\\left(\\frac{G_y}{G_x}\\right)$$</div><p></p>\n<p>If the two formulas above confuse you, read on as we unpack these ideas one at a time.</p>\n<h3 class=\"mume-header\" id=\"intuition-discrete-derivative\">Intuition: Discrete Derivative</h3>\n\n<p>In computer vision literature, you&apos;ll often hear about &quot;taking the derivative&quot;, and this may serve as a source of confusion for beginning practitioners, since &quot;derivatives&quot; are often thought of in the context of a continuous function. An image is a 2D matrix of discrete values, so how do we wrap our head around the idea of taking a derivative?</p>\n<p>But why do we even bother with derivatives when this course is supposed to be about edge detection in images?</p>\n<p><img src=\"assets/derivatives.png\" alt></p>\n<p>Among the many ways to answer the question, my favorite is that an image is really just a function. When we treat an image as a function, the utility of taking derivatives becomes a little more obvious. 
In the image below, suppose you want to count the number of windows in this area of Venezia Sestiere Cannaregio; your program can look for large derivatives, since there are sharp changes in pixel intensity from the windows to the surrounding wall:</p>\n<p><img src=\"assets/surface.png\" alt></p>\n<p>The code to generate the surface plot above is in <code>img2surface.py</code>.</p>\n<p>Let&apos;s go back to our x-direction kernel in the Sobel Operator.<br>\nThis kernel has all zeros in its middle column, which is quite easy to reason about. Essentially, for each pixel in our image, we want to compute its derivative in the x-direction by approximating a formula that you may have come across in your calculus class:</p>\n<p></p><div class=\"mathjax-exps\">$$f&apos;(x) = \\lim_{h\\to0}\\frac{f(x+h)-f(x)}{h}$$</div><p></p>\n<p>This approximation is also called the &apos;forward difference&apos;, because we&apos;re taking a value of <span class=\"mathjax-exps\">$x$</span> and computing the difference in <span class=\"mathjax-exps\">$f(x)$</span> as we increment it by a small amount forward, denoted as <span class=\"mathjax-exps\">$h$</span>.</p>\n<p>As it turns out, using the &apos;central difference&apos; to compute the derivative of our discrete signal can deliver better results<sup class=\"footnote-ref\"><a href=\"#fn2\" id=\"fnref2\">[2]</a></sup>:</p>\n<p></p><div class=\"mathjax-exps\">$$f&apos;(x) = \\lim_{h\\to0}\\frac{f(x+0.5h)-f(x-0.5h)}{h}$$</div><p></p>\n<p>To make this more concrete, we can plug the formula into an actual array of pixels:</p>\n<p></p><div class=\"mathjax-exps\">$$[0, 255, 65, \\underline{180}, 255, 255, 255]$$</div><p></p>\n<p>When we set <span class=\"mathjax-exps\">$h=2$</span> at the center pixel (the index of value 180), we have the following:</p>\n<p></p><div class=\"mathjax-exps\">$$\\begin{aligned} f&apos;(x) &amp; = \\lim_{h\\to0}\\frac{f(x+0.5h)-f(x-0.5h)}{h}\\\\ &amp; = \\frac{f(x+1)-f(x-1)}{2} \\\\ &amp; = \\frac{255-65}{2} \\\\  &amp; = 95 
\\end{aligned}$$</div><p></p>\n<p>Notice that a large part of the calculation we just performed is equivalent to a 1D convolution operation using a <span class=\"mathjax-exps\">$\\begin{bmatrix} -1 &amp; 0 &amp;  1 \\end{bmatrix}$</span> kernel.</p>\n<p>When the same 1x3 kernel <span class=\"mathjax-exps\">$\\begin{bmatrix} -1 &amp; 0 &amp;  1 \\end{bmatrix}$</span> is applied on the right-most part of the image, where it&apos;s just white space ([..., 255, 255, 255]), the kernel would evaluate to 0. In other words, our derivative filter returns no response where it can&apos;t detect a sharp change in pixel intensity.</p>\n<p>As a reminder, the x-direction kernel in our Sobel Operator is the following:<br>\n</p><div class=\"mathjax-exps\">$$G_x = \\begin{bmatrix} 1 &amp; 0 &amp; -1 \\\\ 2 &amp; 0 &amp; -2 \\\\ 1 &amp; 0 &amp; -1  \\end{bmatrix}$$</div><p></p>\n<p>This takes our 1x3 kernel and, instead of convolving one row of pixels at a time, extends it to convolve over 3x3 neighborhoods using a weighted average approach.</p>\n<h3 class=\"mume-header\" id=\"code-illustrations-sobel-operator\">Code Illustrations: Sobel Operator</h3>\n\n<p>The two kernels (one for horizontal and another for vertical edge detection) can be constructed, respectively, like the following:</p>\n<pre data-role=\"codeBlock\" data-info=\"py\" class=\"language-python\">sobel_x <span class=\"token operator\">=</span> np<span class=\"token punctuation\">.</span>array<span class=\"token punctuation\">(</span><span class=\"token punctuation\">[</span><span class=\"token punctuation\">[</span><span class=\"token number\">1</span><span class=\"token punctuation\">,</span> <span class=\"token number\">0</span><span class=\"token punctuation\">,</span> <span class=\"token operator\">-</span><span class=\"token number\">1</span><span class=\"token punctuation\">]</span><span class=\"token punctuation\">,</span>\n                    <span class=\"token punctuation\">[</span><span class=\"token 
number\">2</span><span class=\"token punctuation\">,</span> <span class=\"token number\">0</span><span class=\"token punctuation\">,</span> <span class=\"token operator\">-</span><span class=\"token number\">2</span><span class=\"token punctuation\">]</span><span class=\"token punctuation\">,</span>\n                    <span class=\"token punctuation\">[</span><span class=\"token number\">1</span><span class=\"token punctuation\">,</span> <span class=\"token number\">0</span><span class=\"token punctuation\">,</span> <span class=\"token operator\">-</span><span class=\"token number\">1</span><span class=\"token punctuation\">]</span><span class=\"token punctuation\">]</span><span class=\"token punctuation\">)</span>\n\nsobel_y <span class=\"token operator\">=</span> np<span class=\"token punctuation\">.</span>array<span class=\"token punctuation\">(</span><span class=\"token punctuation\">[</span><span class=\"token punctuation\">[</span><span class=\"token number\">1</span><span class=\"token punctuation\">,</span> <span class=\"token number\">2</span><span class=\"token punctuation\">,</span> <span class=\"token number\">1</span><span class=\"token punctuation\">]</span><span class=\"token punctuation\">,</span>\n                    <span class=\"token punctuation\">[</span><span class=\"token number\">0</span><span class=\"token punctuation\">,</span> <span class=\"token number\">0</span><span class=\"token punctuation\">,</span> <span class=\"token number\">0</span><span class=\"token punctuation\">]</span><span class=\"token punctuation\">,</span>\n                    <span class=\"token punctuation\">[</span><span class=\"token operator\">-</span><span class=\"token number\">1</span><span class=\"token punctuation\">,</span> <span class=\"token operator\">-</span><span class=\"token number\">2</span><span class=\"token punctuation\">,</span> <span class=\"token operator\">-</span><span class=\"token number\">1</span><span class=\"token 
punctuation\">]</span><span class=\"token punctuation\">]</span><span class=\"token punctuation\">)</span>\n</pre><p>You may have guessed that, given its role in digital image processing, <code>opencv</code> have included a method that performs our Sobel Operator for us, and thankfully there is. Here&apos;s an example of using the <code>cv2.Sobel(src, ddepth, dx, dy, dst=None, ksize)</code> method:</p>\n<pre data-role=\"codeBlock\" data-info=\"py\" class=\"language-python\">gradient_x <span class=\"token operator\">=</span> cv2<span class=\"token punctuation\">.</span>Sobel<span class=\"token punctuation\">(</span>img<span class=\"token punctuation\">,</span> cv2<span class=\"token punctuation\">.</span>CV_64F<span class=\"token punctuation\">,</span> <span class=\"token number\">1</span><span class=\"token punctuation\">,</span> <span class=\"token number\">0</span><span class=\"token punctuation\">,</span> ksize<span class=\"token operator\">=</span><span class=\"token number\">3</span><span class=\"token punctuation\">)</span>\ngradient_y <span class=\"token operator\">=</span> cv2<span class=\"token punctuation\">.</span>Sobel<span class=\"token punctuation\">(</span>img<span class=\"token punctuation\">,</span> cv2<span class=\"token punctuation\">.</span>CV_64F<span class=\"token punctuation\">,</span> <span class=\"token number\">0</span><span class=\"token punctuation\">,</span> <span class=\"token number\">1</span><span class=\"token punctuation\">,</span> ksize<span class=\"token operator\">=</span><span class=\"token number\">3</span><span class=\"token punctuation\">)</span>\n<span class=\"token keyword\">print</span><span class=\"token punctuation\">(</span><span class=\"token string-interpolation\"><span class=\"token string\">f&quot;Range: </span><span class=\"token interpolation\"><span class=\"token punctuation\">{</span>np<span class=\"token punctuation\">.</span><span class=\"token builtin\">min</span><span class=\"token 
punctuation\">(</span>gradient_x<span class=\"token punctuation\">)</span><span class=\"token punctuation\">}</span></span><span class=\"token string\"> | </span><span class=\"token interpolation\"><span class=\"token punctuation\">{</span>np<span class=\"token punctuation\">.</span><span class=\"token builtin\">max</span><span class=\"token punctuation\">(</span>gradient_x<span class=\"token punctuation\">)</span><span class=\"token punctuation\">}</span></span><span class=\"token string\">&quot;</span></span><span class=\"token punctuation\">)</span>\n<span class=\"token comment\"># Range: -177.0 | 204.0</span>\n\ngradient_x <span class=\"token operator\">=</span> np<span class=\"token punctuation\">.</span>uint8<span class=\"token punctuation\">(</span>np<span class=\"token punctuation\">.</span>absolute<span class=\"token punctuation\">(</span>gradient_x<span class=\"token punctuation\">)</span><span class=\"token punctuation\">)</span>\ngradient_y <span class=\"token operator\">=</span> np<span class=\"token punctuation\">.</span>uint8<span class=\"token punctuation\">(</span>np<span class=\"token punctuation\">.</span>absolute<span class=\"token punctuation\">(</span>gradient_y<span class=\"token punctuation\">)</span><span class=\"token punctuation\">)</span>\n<span class=\"token keyword\">print</span><span class=\"token punctuation\">(</span><span class=\"token string-interpolation\"><span class=\"token string\">f&quot;Range uint8: </span><span class=\"token interpolation\"><span class=\"token punctuation\">{</span>np<span class=\"token punctuation\">.</span><span class=\"token builtin\">min</span><span class=\"token punctuation\">(</span>gradient_x<span class=\"token punctuation\">)</span><span class=\"token punctuation\">}</span></span><span class=\"token string\"> | </span><span class=\"token interpolation\"><span class=\"token punctuation\">{</span>np<span class=\"token punctuation\">.</span><span class=\"token builtin\">max</span><span class=\"token 
punctuation\">(</span>gradient_x<span class=\"token punctuation\">)</span><span class=\"token punctuation\">}</span></span><span class=\"token string\">&quot;</span></span><span class=\"token punctuation\">)</span>\n<span class=\"token comment\"># Range uint8: 0 | 204</span>\n\ncv2<span class=\"token punctuation\">.</span>imshow<span class=\"token punctuation\">(</span><span class=\"token string\">&quot;Gradient X&quot;</span><span class=\"token punctuation\">,</span> gradient_x<span class=\"token punctuation\">)</span>\ncv2<span class=\"token punctuation\">.</span>imshow<span class=\"token punctuation\">(</span><span class=\"token string\">&quot;Gradient Y&quot;</span><span class=\"token punctuation\">,</span> gradient_y<span class=\"token punctuation\">)</span>\n</pre><p><img src=\"assets/sudokudemo.png\" alt></p>\n<p>The code above, extracted from <code>sobel_01.py</code> reinforces a couple of ideas that we&apos;ve been working on. It shows that:</p>\n<ul>\n<li>the <span class=\"mathjax-exps\">$G_x$</span> and <span class=\"mathjax-exps\">$G_y$</span>, gradients of the image, are computed separately through the convolution of two different Sobel kernels</li>\n<li><span class=\"mathjax-exps\">$G_x$</span> and <span class=\"mathjax-exps\">$G_y$</span> responded to the change in pixel values along the x-direction and y-direction respectively, as visualized in the illustration above</li>\n<li>convolution using the two Sobel filters may, and often will, produce a value outside the range of 0 and 255. Given the presence of [-1, -2, -1]  in one side of our kernel, mathematically this may lead to an output value of -1020. To store the values from these convolutions we use a 64-bit floating point (<code>cv2.CV_64F</code>). 
OpenCV suggests to &quot;keep the output datatype to some higher form such as <code>cv2.CV_64F</code>, take its absolute value and then convert back to <code>cv2.CV_8U</code>.<sup class=\"footnote-ref\"><a href=\"#fn3\" id=\"fnref3\">[3]</a></sup>&quot;</li>\n</ul>\n<p>While the code above certainly works, OpenCV also has a method that scales, calculates absolute values and converts the result to 8-bit. <code>cv2.convertScaleAbs(src, dst, alpha=1, beta=0)</code> performs the following:<br>\n</p><div class=\"mathjax-exps\">$$dst(I) = cast&lt;uchar&gt;(|src(I) * \\alpha + \\beta|)$$</div><p></p>\n<pre data-role=\"codeBlock\" data-info=\"py\" class=\"language-python\">gradient_x <span class=\"token operator\">=</span> cv2<span class=\"token punctuation\">.</span>Sobel<span class=\"token punctuation\">(</span>img<span class=\"token punctuation\">,</span> cv2<span class=\"token punctuation\">.</span>CV_64F<span class=\"token punctuation\">,</span> <span class=\"token number\">1</span><span class=\"token punctuation\">,</span> <span class=\"token number\">0</span><span class=\"token punctuation\">,</span> ksize<span class=\"token operator\">=</span><span class=\"token number\">3</span><span class=\"token punctuation\">)</span>\ngradient_y <span class=\"token operator\">=</span> cv2<span class=\"token punctuation\">.</span>Sobel<span class=\"token punctuation\">(</span>img<span class=\"token punctuation\">,</span> cv2<span class=\"token punctuation\">.</span>CV_64F<span class=\"token punctuation\">,</span> <span class=\"token number\">0</span><span class=\"token punctuation\">,</span> <span class=\"token number\">1</span><span class=\"token punctuation\">,</span> ksize<span class=\"token operator\">=</span><span class=\"token number\">3</span><span class=\"token punctuation\">)</span>\n\ngradient_x <span class=\"token operator\">=</span> cv2<span class=\"token punctuation\">.</span>convertScaleAbs<span class=\"token punctuation\">(</span>gradient_x<span class=\"token 
punctuation\">)</span>\ngradient_y <span class=\"token operator\">=</span> cv2<span class=\"token punctuation\">.</span>convertScaleAbs<span class=\"token punctuation\">(</span>gradient_y<span class=\"token punctuation\">)</span>\n<span class=\"token keyword\">print</span><span class=\"token punctuation\">(</span><span class=\"token string-interpolation\"><span class=\"token string\">f&quot;Range: </span><span class=\"token interpolation\"><span class=\"token punctuation\">{</span>np<span class=\"token punctuation\">.</span><span class=\"token builtin\">min</span><span class=\"token punctuation\">(</span>gradient_x<span class=\"token punctuation\">)</span><span class=\"token punctuation\">}</span></span><span class=\"token string\"> | </span><span class=\"token interpolation\"><span class=\"token punctuation\">{</span>np<span class=\"token punctuation\">.</span><span class=\"token builtin\">max</span><span class=\"token punctuation\">(</span>gradient_x<span class=\"token punctuation\">)</span><span class=\"token punctuation\">}</span></span><span class=\"token string\">&quot;</span></span><span class=\"token punctuation\">)</span>\n</pre><h3 class=\"mume-header\" id=\"dive-deeper-gradient-orientation-magnitude\">Dive Deeper: Gradient Orientation &amp; Magnitude</h3>\n\n<p>At the beginning of this course I said that images are really just 2D functions, before showing you the intricacies of our Sobel kernels. We saw the clever design of both the x- and y-direction kernels, which borrow from the concept of &quot;taking the derivative&quot; you often see in calculus textbooks.</p>\n<p>But on a really basic level, these kernels only return the x and y edge responses. These are <strong>not the image gradient</strong>, just pure arithmetic values from following the convolution process. 
To get to the final form (where the edges in our image are emphasized) we still need to compute the gradient direction and magnitude for each point in our image.</p>\n<p>This brings us back to our original formula. Recall that the x-direction and y-direction kernels are:</p>\n<p></p><div class=\"mathjax-exps\">$$G_x = \\begin{bmatrix} 1 &amp; 0 &amp; -1 \\\\ 2 &amp; 0 &amp; -2 \\\\ 1 &amp; 0 &amp; -1  \\end{bmatrix}  G_y = \\begin{bmatrix} 1 &amp; 2 &amp; 1 \\\\ 0 &amp; 0 &amp; 0 \\\\ -1 &amp; -2 &amp; -1  \\end{bmatrix}$$</div><p></p>\n<p>We understand that each kernel is applied separately to obtain the gradient component in each orientation, <span class=\"mathjax-exps\">$G_x$</span> and <span class=\"mathjax-exps\">$G_y$</span>. What is the significance of this? Well, as it turns out, if we know the shift in the x-direction and the corresponding change in value in the y-direction, then we can use the Pythagorean theorem to approximate the &quot;length of the slope&quot;, a concept that many of you are familiar with.</p>\n<p>Expressed as a formula, the gradient magnitude is hence:<br>\n</p><div class=\"mathjax-exps\">$$|G| = \\sqrt{G^2_x + G^2_y}$$</div><p></p>\n<p>Along with the well-known Pythagorean theorem, some of you may also have some familiarity with the three trigonometric functions. 
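</p>
<p>As a quick numerical sanity check of the magnitude formula, here is a small sketch using made-up gradient responses (not values from an actual image):</p>

```python
import numpy as np

# hypothetical Sobel responses at two pixels
gx = np.array([3.0, -4.0])
gy = np.array([4.0, 3.0])

# |G| = sqrt(Gx^2 + Gy^2), the "length of the slope"
magnitude = np.sqrt(gx**2 + gy**2)
print(magnitude)  # [5. 5.]
```

<p>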
Particularly, the tangent function tells us that in a right triangle, the <strong>tangent of an angle is the length of the opposite side divided by the length of the adjacent side</strong>.</p>\n<p>This leads us to the following expression:<br>\n</p><div class=\"mathjax-exps\">$$tan(\\theta_{(x,y)})=\\frac{G_y}{G_x}$$</div><p></p>\n<p>Rewriting the expression above, we arrive at the formula to capture the gradient&apos;s direction:<br>\n</p><div class=\"mathjax-exps\">$$\\theta_{(x,y)}=tan^{-1}(\\frac{G_y}{G_x})$$</div><p></p>\n<p><img src=\"assets/2dfuncs.png\" alt></p>\n<p>This whole idea is also illustrated in code, and the script is provided to you:</p>\n<ul>\n<li><code>gradient.py</code> to generate the vector field in the picture above (right)</li>\n<li><code>img2surface.py</code> on the penguin image in the <code>assets</code> folder generates the surface plot</li>\n</ul>\n<p>Succinctly, suppose the two 3x3 kernels do not fire a response (for example, when no edges are detected in the white background of our penguin); both <span class=\"mathjax-exps\">$G_x$</span> and <span class=\"mathjax-exps\">$G_y$</span> will be 0, which leads to a gradient magnitude of 0. You can compute these by hand, let OpenCV&apos;s implementation handle that for you, or use <code>numpy</code> as illustrated in <code>gradient.py</code>:</p>\n<pre data-role=\"codeBlock\" data-info=\"py\" class=\"language-python\">dY<span class=\"token punctuation\">,</span> dX <span class=\"token operator\">=</span> np<span class=\"token punctuation\">.</span>gradient<span class=\"token punctuation\">(</span>img<span class=\"token punctuation\">)</span>\n</pre><h1 class=\"mume-header\" id=\"image-segmentation\">Image Segmentation</h1>\n\n<p>Image segmentation is the process of decomposing an image into parts for further analysis. 
This has many applications:</p>\n<ul>\n<li>Background subtraction in human motion analysis</li>\n<li>Multi-object classification</li>\n<li>Finding the region of interest for OCR (optical character recognition)</li>\n<li>Counting pedestrians in a streamed video source</li>\n<li>Isolating vehicle registration plates (license plates) and vehicle models in a busy highway scene</li>\n</ul>\n<p>Current literature on image segmentation techniques can be classified into<sup class=\"footnote-ref\"><a href=\"#fn4\" id=\"fnref4\">[4]</a></sup>:</p>\n<ul>\n<li>Intensity-based segmentation</li>\n<li>Edge-based segmentation</li>\n<li>Region-based semantic segmentation</li>\n</ul>\n<p>It&apos;s important to note, however, that the rise in popularity of deep learning frameworks and techniques has ushered in a proliferation of new methods to perform what was once a highly difficult task. In future lectures, we&apos;ll explore image segmentation in far greater detail. In this course, we&apos;ll study intensity-based segmentation and edge-based segmentation methods.</p>\n<h2 class=\"mume-header\" id=\"intensity-based-segmentation\">Intensity-based Segmentation</h2>\n\n<p>The intensity-based method is perhaps the simplest, as intensity is the simplest property that pixels can share.</p>\n<p>To make a more concrete case of this, let&apos;s assume you&apos;re working with a team of researchers to build an AI-based &quot;sudoku solver&quot; that, unimaginatively, will compete against human sudoku players in an attempt to further stake a claim in the ongoing debate over AI superiority.</p>\n<p>While your teammates work on the algorithmic design for the actual solver, your task is comparatively straightforward: write a script to scan newspaper images (or print magazines) and binarize them, discarding everything except the digits in the sudoku puzzle.</p>\n<p>This presents a great opportunity to use an intensity-based segmentation technique we spoke about earlier.</p>\n<p>In <code>intensitythresholding_01.py</code>, 
you&apos;ll find a code demonstration of the numerous thresholding methods provided by OpenCV. In total, there are 5 simple thresholding methods: <code>THRESH_BINARY</code>, <code>THRESH_BINARY_INV</code>, <code>THRESH_TRUNC</code>, <code>THRESH_TOZERO</code> and <code>THRESH_TOZERO_INV</code><sup class=\"footnote-ref\"><a href=\"#fn5\" id=\"fnref5\">[5]</a></sup>.</p>\n<h3 class=\"mume-header\" id=\"simple-thresholding\">Simple Thresholding</h3>\n\n<p>The method call is identical for all of them:</p>\n<pre data-role=\"codeBlock\" data-info=\"py\" class=\"language-python\">cv2<span class=\"token punctuation\">.</span>threshold<span class=\"token punctuation\">(</span>img<span class=\"token punctuation\">,</span> thresh<span class=\"token punctuation\">,</span> maxval<span class=\"token punctuation\">,</span> <span class=\"token builtin\">type</span><span class=\"token punctuation\">)</span>\n</pre><p>We specify our source image <code>img</code> (usually in grayscale), a threshold value <code>thresh</code> used to binarize the image pixels, and a max value <code>maxval</code> to assign to any pixel that crosses our threshold.</p>\n<p>The mathematical functions for each one of them:<br>\n<img src=\"assets/threshmethods.png\" alt></p>\n<p>They&apos;re collectively known as <strong>simple thresholding</strong> in OpenCV because they use a global threshold value; any pixel smaller than the threshold is set to 0, otherwise it is set to <code>maxval</code>.</p>\n<p>These probably sound too simplistic for anything beyond the simplest of real-world images, and for the majority of cases they are. 
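</p>
<p>The binary rule above can be expressed directly in numpy. The following is a didactic sketch of the <code>THRESH_BINARY</code> behavior on a toy array; <code>cv2.threshold</code> performs the same comparison (it additionally returns the threshold value used):</p>

```python
import numpy as np

img = np.array([[50, 100],
                [150, 200]], dtype=np.uint8)  # toy grayscale "image"
thresh, maxval = 127, 255

# THRESH_BINARY: pixels above thresh become maxval, the rest become 0
binary = np.where(img > thresh, maxval, 0).astype(np.uint8)
# binary is 0 where img <= 127 and 255 elsewhere
```

<p>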
They call for proper judgment of the task at hand.</p>\n<p>Applying the various simple thresholding methods to our sudoku image, we observe that the digits are for the most part extracted successfully while the background information is greatly reduced:</p>\n<p><img src=\"assets/sudoku_simple.png\" alt></p>\n<p>Refer to <code>intensitythresholding_01.py</code> for the full code.</p>\n<p>As a simple homework, try to practice <strong>simple thresholding</strong> on <code>car2.png</code>, located in your <code>homework</code> folder. To reduce noise, you may have to apply a blurring operation prior to thresholding. As you practice, pay attention to the interaction between your threshold values and the output. Later in the course, you&apos;ll learn how to draw contours, which will come in handy in producing the final output:</p>\n<p><img src=\"assets/cars_hw.png\" alt></p>\n<p>As you work on your homework, you will notice that given the varying lighting conditions across the different regions of our image, any global value we pick will be too low in some regions and too high in others.</p>\n<h3 class=\"mume-header\" id=\"adaptive-thresholding\">Adaptive Thresholding</h3>\n\n<p>Using a global value as an intensity threshold may work in particular cases but may be too naive to perform well when, say, an image has different lighting conditions in different areas. A great example of this case is the object extraction exercise you performed using <code>car2.png</code>.</p>\n<p>Adaptive thresholding is not a lot different from the aforementioned thresholding techniques, except it determines the threshold for each pixel based on its neighborhood. 
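</p>
<p>The per-pixel rule of <code>ADAPTIVE_THRESH_MEAN_C</code> (combined with <code>THRESH_BINARY</code>) can be sketched in plain numpy. This is a didactic version only; OpenCV&apos;s implementation differs in border handling and is far more optimized:</p>

```python
import numpy as np

def adaptive_mean_threshold(img, block_size=3, C=2, maxval=255):
    """Didactic sketch: each pixel is compared against the mean of its
    block_size x block_size neighborhood, minus the constant C."""
    pad = block_size // 2
    padded = np.pad(img.astype(np.float64), pad, mode="edge")
    out = np.zeros_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            # the threshold for THIS pixel depends on its neighborhood
            local_mean = padded[y:y + block_size, x:x + block_size].mean()
            out[y, x] = maxval if img[y, x] > local_mean - C else 0
    return out

img = np.array([[10, 10, 10],
                [10, 80, 10],
                [10, 10, 10]], dtype=np.uint8)
result = adaptive_mean_threshold(img)
# only the bright center pixel exceeds its local mean minus C
```

<p>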
This in effect means that the image is assigned different thresholds across the different regions, leading to a cleaner output when our image has different degrees of illumination.</p>\n<p><img src=\"assets/cars_adaptive.png\" alt></p>\n<p>The method is called with the source image (<code>src</code>), a max value (<code>maxValue</code>), the method (<code>adaptiveMethod</code>), a threshold type (<code>thresholdType</code>), the size of the neighborhood (<code>blockSize</code>) and a constant (<code>C</code>) that is subtracted from the mean or the weighted sum of the neighborhood pixels.</p>\n<pre data-role=\"codeBlock\" data-info=\"py\" class=\"language-python\">mean_adaptive <span class=\"token operator\">=</span> cv2<span class=\"token punctuation\">.</span>adaptiveThreshold<span class=\"token punctuation\">(</span>\n    img<span class=\"token punctuation\">,</span> <span class=\"token number\">255</span><span class=\"token punctuation\">,</span> cv2<span class=\"token punctuation\">.</span>ADAPTIVE_THRESH_MEAN_C<span class=\"token punctuation\">,</span> cv2<span class=\"token punctuation\">.</span>THRESH_BINARY<span class=\"token punctuation\">,</span> <span class=\"token number\">11</span><span class=\"token punctuation\">,</span> <span class=\"token number\">2</span>\n<span class=\"token punctuation\">)</span>\ngaussian_adaptive <span class=\"token operator\">=</span> cv2<span class=\"token punctuation\">.</span>adaptiveThreshold<span class=\"token punctuation\">(</span>\n    img<span class=\"token punctuation\">,</span> <span class=\"token number\">255</span><span class=\"token punctuation\">,</span> cv2<span class=\"token punctuation\">.</span>ADAPTIVE_THRESH_GAUSSIAN_C<span class=\"token punctuation\">,</span> cv2<span class=\"token punctuation\">.</span>THRESH_BINARY<span class=\"token punctuation\">,</span> <span class=\"token number\">11</span><span class=\"token punctuation\">,</span> <span class=\"token number\">2</span>\n<span class=\"token 
punctuation\">)</span>\n</pre><p>The code, taken from <code>adaptivethresholding_01.py</code>, produces the following:<br>\n<img src=\"assets/sudoku_binary.png\" alt></p>\n<h2 class=\"mume-header\" id=\"edge-based-contour-estimation\">Edge-based contour estimation</h2>\n\n<p>Edge-based segmentation separates foreground objects by first identifying all edges in our image. The Sobel operator and other gradient-based filter functions are good and well-known candidates for such an operation.<sup class=\"footnote-ref\"><a href=\"#fn6\" id=\"fnref6\">[6]</a></sup></p>\n<p>Once we obtain the edges, we perform the contour approximation operation using the <code>findContours</code> method in OpenCV. But what exactly are contours?</p>\n<p>In OpenCV&apos;s words<sup class=\"footnote-ref\"><a href=\"#fn7\" id=\"fnref7\">[7]</a></sup>,</p>\n<blockquote>\n<p>Contours can be explained simply as a curve joining all the continuous points (along the boundary), having same color or intensity. The contours are a useful tool for shape analysis and object detection and recognition.</p>\n</blockquote>\n<p>If we have &quot;a curve joining all the continuous points along the boundary&quot;, then we are able to extract this object. If we wish to count the number of contours in our image, the method also conveniently returns a list of all the found contours, making it easy to perform <code>len()</code> on the list to retrieve the final count.</p>\n<p>There are three arguments to the <code>findContours()</code> function: the first is the source image, the second is the retrieval mode and the last is the contour approximation method. 
Both the contour retrieval mode and the approximation method are discussed in the next sub-section.</p>\n<pre data-role=\"codeBlock\" data-info=\"py\" class=\"language-python\"><span class=\"token punctuation\">(</span>cnts<span class=\"token punctuation\">,</span> hierarchy<span class=\"token punctuation\">)</span> <span class=\"token operator\">=</span> cv2<span class=\"token punctuation\">.</span>findContours<span class=\"token punctuation\">(</span>\n    img<span class=\"token punctuation\">,</span>\n    cv2<span class=\"token punctuation\">.</span>RETR_EXTERNAL<span class=\"token punctuation\">,</span>\n    cv2<span class=\"token punctuation\">.</span>CHAIN_APPROX_SIMPLE<span class=\"token punctuation\">,</span>\n<span class=\"token punctuation\">)</span>\n</pre><p>The function returns the contours and hierarchy, with contours being a list of all the contours in the image. Each contour is a NumPy array of <code>(x,y)</code> coordinates of boundary points of the object, giving each contour a shape of <code>(n, 1, 2)</code> for <code>n</code> boundary points.</p>\n<p>What this allows us to do is pass the contours we retrieved to the <code>cv2.drawContours()</code> function either individually, exhaustively in a for-loop fashion, or all in one go.</p>\n<p>Assuming <code>img</code> is the image we want to draw our contours on, the following code demonstrates these different methods:</p>\n<pre data-role=\"codeBlock\" data-info=\"py\" class=\"language-python\"><span class=\"token comment\"># draw all contours</span>\ncv2<span class=\"token punctuation\">.</span>drawContours<span class=\"token punctuation\">(</span>img<span class=\"token punctuation\">,</span> cnts<span class=\"token punctuation\">,</span> <span class=\"token operator\">-</span><span class=\"token number\">1</span><span class=\"token punctuation\">,</span> <span class=\"token punctuation\">(</span><span class=\"token number\">0</span><span class=\"token punctuation\">,</span><span class=\"token 
number\">255</span><span class=\"token punctuation\">,</span><span class=\"token number\">0</span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">,</span> <span class=\"token number\">3</span><span class=\"token punctuation\">)</span>\n<span class=\"token comment\"># draw the 3rd contour</span>\ncv2<span class=\"token punctuation\">.</span>drawContours<span class=\"token punctuation\">(</span>img<span class=\"token punctuation\">,</span> cnts<span class=\"token punctuation\">,</span> <span class=\"token number\">2</span><span class=\"token punctuation\">,</span> <span class=\"token punctuation\">(</span><span class=\"token number\">0</span><span class=\"token punctuation\">,</span><span class=\"token number\">255</span><span class=\"token punctuation\">,</span><span class=\"token number\">0</span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">,</span> <span class=\"token number\">3</span><span class=\"token punctuation\">)</span>\n<span class=\"token comment\"># draw the first, fourth and fifth contour</span>\ncnt_selected <span class=\"token operator\">=</span> <span class=\"token punctuation\">[</span>cnts<span class=\"token punctuation\">[</span><span class=\"token number\">0</span><span class=\"token punctuation\">]</span><span class=\"token punctuation\">,</span> cnts<span class=\"token punctuation\">[</span><span class=\"token number\">3</span><span class=\"token punctuation\">]</span><span class=\"token punctuation\">,</span> cnts<span class=\"token punctuation\">[</span><span class=\"token number\">4</span><span class=\"token punctuation\">]</span><span class=\"token punctuation\">]</span>\ncv2<span class=\"token punctuation\">.</span>drawContours<span class=\"token punctuation\">(</span>canvas<span class=\"token punctuation\">,</span> cnt_selected<span class=\"token punctuation\">,</span> <span class=\"token operator\">-</span><span class=\"token number\">1</span><span class=\"token 
punctuation\">,</span> <span class=\"token punctuation\">(</span><span class=\"token number\">0</span><span class=\"token punctuation\">,</span> <span class=\"token number\">255</span><span class=\"token punctuation\">,</span> <span class=\"token number\">255</span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">,</span> <span class=\"token number\">1</span><span class=\"token punctuation\">)</span>\n<span class=\"token comment\"># draw the fourth contour</span>\ncv2<span class=\"token punctuation\">.</span>drawContours<span class=\"token punctuation\">(</span>img<span class=\"token punctuation\">,</span> cnts<span class=\"token punctuation\">,</span> <span class=\"token number\">3</span><span class=\"token punctuation\">,</span> <span class=\"token punctuation\">(</span><span class=\"token number\">0</span><span class=\"token punctuation\">,</span><span class=\"token number\">255</span><span class=\"token punctuation\">,</span><span class=\"token number\">0</span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">,</span> <span class=\"token number\">3</span><span class=\"token punctuation\">)</span>\n</pre><p>The first argument to this function is the source image, the second is the contours as a Python list, the third is the index of the contour to draw (<code>-1</code> draws them all) and the remaining arguments are the color and thickness of the contour lines respectively.</p>\n<p>One common problem beginners run into is performing the <code>findContours</code> operation on the grayscale image instead of the binary image, leading to poorer accuracy.</p>\n<p>When we execute <code>contour_01.py</code>, we notice that the <code>drawContours</code> operation yields the following output:</p>\n<p><img src=\"assets/handholding.png\" alt></p>\n<p>There are 5 occurrences where our <code>findContours</code> function incorrectly merged two contours into one because two penguins were too close to each other. 
When we execute <code>len(cnts)</code>, we will find that the returned value is 5 less than the actual count.</p>\n<p>Try to fix <code>contour_01.py</code> by performing the contour approximation on our binary image using the thresholding techniques you&apos;ve learned in the previous section.</p>\n<h3 class=\"mume-header\" id=\"contour-retrieval-and-approximation\">Contour Retrieval and Approximation</h3>\n\n<p>In the <code>findContours()</code> function call, we passed our image to <code>src</code> in the first argument. The second argument is the contour retrieval mode, and the documentation describes 4 of them<sup class=\"footnote-ref\"><a href=\"#fn8\" id=\"fnref8\">[8]</a></sup>:</p>\n<ul>\n<li><code>RETR_EXTERNAL</code>: retrieves only the extreme outer contours (see image below for reference)</li>\n<li><code>RETR_LIST</code>: retrieves all contours without establishing any hierarchical relationships</li>\n<li><code>RETR_CCOMP</code>: retrieves all contours and organizes them into a two-level hierarchy (external boundary + boundaries of the holes)</li>\n<li><code>RETR_TREE</code>: retrieves all of the contours and reconstructs a full hierarchy of nested contours</li>\n</ul>\n<p><img src=\"assets/outervsall.png\" alt></p>\n<p>In our case, we don&apos;t particularly care about the hierarchy, and so the second to fourth modes all have the same effect. In other cases, you may experiment with a different contour retrieval mode to obtain both the contours and the hierarchy for further processing.</p>\n<p>What about the last parameter passed to our <code>findContours</code> method?</p>\n<p>Recall that contours are just boundaries of a shape. In a sense, a contour is an array of <code>(x,y)</code> coordinates used to &quot;record&quot; the boundary of a shape. Given this collection of coordinates, we can then recreate the boundary of our shape. 
This raises the next question: how many sets of coordinates do we need to store to recreate our boundary?</p>\n<p>Suppose we perform the <code>findContours</code> operation on an image of two rectangles. One method is to simply store as many points around these rectangles as possible. When we set <code>cv2.CHAIN_APPROX_NONE</code>, that is in fact what the algorithm does, resulting in 658 points around the border of the top rectangle:<br>\n<img src=\"homework/equal.png\" alt></p>\n<p>However, notice that a more efficient solution would have been to store only the 4 coordinates at the corners of each rectangle. The contour is perfectly represented and recreated using just 4 points per rectangle, resulting in a total of 8 points compared to 1,316 points. <code>cv2.CHAIN_APPROX_SIMPLE</code><sup class=\"footnote-ref\"><a href=\"#fn9\" id=\"fnref9\">[9]</a></sup> is an implementation of this, and you can find the sample code below:</p>\n<p><img src=\"assets/approx.png\" alt></p>\n<pre data-role=\"codeBlock\" data-info=\"py\" class=\"language-python\">cnts<span class=\"token punctuation\">,</span> _ <span class=\"token operator\">=</span> cv2<span class=\"token punctuation\">.</span>findContours<span class=\"token punctuation\">(</span>\n        <span class=\"token comment\"># does this need to be changed?</span>\n        edged<span class=\"token punctuation\">,</span>\n        cv2<span class=\"token punctuation\">.</span>RETR_EXTERNAL<span class=\"token punctuation\">,</span>\n        cv2<span class=\"token punctuation\">.</span>CHAIN_APPROX_SIMPLE<span class=\"token punctuation\">,</span>\n    <span class=\"token punctuation\">)</span>\n<span class=\"token keyword\">print</span><span class=\"token punctuation\">(</span><span class=\"token string-interpolation\"><span class=\"token string\">f&quot;Cnts Simple Shape (1): </span><span class=\"token interpolation\"><span class=\"token punctuation\">{</span>cnts<span class=\"token 
punctuation\">[</span><span class=\"token number\">0</span><span class=\"token punctuation\">]</span><span class=\"token punctuation\">.</span>shape<span class=\"token punctuation\">}</span></span><span class=\"token string\">&quot;</span></span><span class=\"token punctuation\">)</span>\n<span class=\"token comment\"># return: Cnts Simple Shape (1): (4, 1, 2)</span>\n<span class=\"token comment\"># output of cnts[0]:</span>\n<span class=\"token comment\"># array([[[ 47, 179]],</span>\n<span class=\"token comment\">#       [[ 47, 259]],</span>\n<span class=\"token comment\">#       [[296, 259]],</span>\n<span class=\"token comment\">#       [[296, 179]]], dtype=int32)</span>\n\ncnts2<span class=\"token punctuation\">,</span> _ <span class=\"token operator\">=</span> cv2<span class=\"token punctuation\">.</span>findContours<span class=\"token punctuation\">(</span>\n        <span class=\"token comment\"># does this need to be changed?</span>\n        edged<span class=\"token punctuation\">,</span>\n        cv2<span class=\"token punctuation\">.</span>RETR_EXTERNAL<span class=\"token punctuation\">,</span>\n        cv2<span class=\"token punctuation\">.</span>CHAIN_APPROX_NONE<span class=\"token punctuation\">,</span>\n    <span class=\"token punctuation\">)</span>\n<span class=\"token keyword\">print</span><span class=\"token punctuation\">(</span><span class=\"token string-interpolation\"><span class=\"token string\">f&quot;Cnts NoApprox Shape:</span><span class=\"token interpolation\"><span class=\"token punctuation\">{</span>cnts2<span class=\"token punctuation\">[</span><span class=\"token number\">0</span><span class=\"token punctuation\">]</span><span class=\"token punctuation\">.</span>shape<span class=\"token punctuation\">}</span></span><span class=\"token string\">&quot;</span></span><span class=\"token punctuation\">)</span>\n<span class=\"token comment\"># Cnts NoApprox Shape:(658, 1, 2)</span>\n</pre><p>The full script for the experiment above is in 
<code>contourapprox.py</code>.</p>\n<p>You may, at this point, hop to the Learn By Building section to attempt your homework.</p>\n<h1 class=\"mume-header\" id=\"canny-edge-detector\">Canny Edge Detector</h1>\n\n<p>John Canny developed a multi-stage procedure that, some 30 years later, is &quot;still a state-of-the-art edge detector&quot;<sup class=\"footnote-ref\"><a href=\"#fn10\" id=\"fnref10\">[10]</a></sup>. Better edge detection algorithms usually require greater computational resources (and consequently longer processing times) or a greater number of parameters, in an area where algorithm speed is oftentimes the most important criterion. For the reasons above, along with its general robustness, the Canny edge algorithm has become one of the &quot;most important methods to find edges&quot; even in modern literature<sup class=\"footnote-ref\"><a href=\"#fn1\" id=\"fnref1:2\">[1:2]</a></sup>.</p>\n<p>I said it&apos;s a multi-stage procedure because the technique, as described in his original paper <em>A Computational Approach to Edge Detection</em>, works as follows<sup class=\"footnote-ref\"><a href=\"#fn11\" id=\"fnref11\">[11]</a></sup>:</p>\n<ol>\n<li>Gaussian smoothing\n<ul>\n<li>Noise reduction using a 5x5 Gaussian filter</li>\n</ul>\n</li>\n<li>Compute gradient magnitudes and angles</li>\n<li>Apply non-maximum suppression (NMS)\n<ul>\n<li>Suppress close-by edges that are non-maximal, leaving only local maxima as edges</li>\n</ul>\n</li>\n<li>Track edges by hysteresis\n<ul>\n<li>Suppress all other edges that are weak and not connected to strong edges, and link the edges</li>\n</ul>\n</li>\n</ol>\n<p>Steps (1) and (2) in the procedure above can be achieved using code we&apos;ve written so far in our Sobel Operator scripts. We use the Sobel mask filters to compute <span class=\"mathjax-exps\">$G_x$</span> and <span class=\"mathjax-exps\">$G_y$</span>, respectively the gradient component in each orientation. 
We then compute the gradient magnitude and the angle <span class=\"mathjax-exps\">$\\theta$</span>:</p>\n<p>Gradient magnitude:<br>\n</p><div class=\"mathjax-exps\">$$|G| = \\sqrt{G^2_x + G^2_y}$$</div><p></p>\n<p>And recall that the slope <span class=\"mathjax-exps\">$\\theta$</span> of the gradient is calculated as follows:<br>\n</p><div class=\"mathjax-exps\">$$\\theta_{(x,y)}=tan^{-1}(\\frac{G_y}{G_x})$$</div><p></p>\n<h2 class=\"mume-header\" id=\"edge-thinning\">Edge Thinning</h2>\n\n<p>Step (3) in the procedure is another common technique in computer vision known as non-maximum suppression (NMS). Let&apos;s begin by taking a look at the output of our Sobel edge detector from earlier exercises:<br>\n<img src=\"assets/sobeledges.png\" alt></p>\n<p>Notice as we zoom in on the output image, we can see the gradient-based method did create our strong edges, but it also created the &quot;weak&quot; edges it finds in our image. Because it is not a parameterized function -- the edge is computed using values of the gradient magnitude and direction -- we have to rely on an additional mechanism for the edge thinning operation, with the criterion being one accurate response to any given edge<sup class=\"footnote-ref\"><a href=\"#fn12\" id=\"fnref12\">[12]</a></sup>.</p>\n<p>Non-maximum suppression helps us obtain the strongest edges by suppressing all the gradient values, i.e. setting them to 0, except for the local maxima, which indicate locations with the sharpest change of intensity value. In the words of <code>OpenCV</code>:</p>\n<blockquote>\n<p>After getting gradient magnitude and direction, a full scan of image is done to remove any unwanted pixels which may not constitute the edge. For this, at every pixel, pixel is checked if it is a local maximum in its neighborhood in the direction of gradient. If point A is on the edge, and point B and C are in gradient directions, point A is checked with point B and C to see if it forms a local maximum. 
If so, it is considered for next stage, otherwise, it is suppressed (put to zero).</p>\n</blockquote>\n<p>The output of step (3) is a binary image with thin edges.</p>\n<p>The code<sup class=\"footnote-ref\"><a href=\"#fn13\" id=\"fnref13\">[13]</a></sup> demonstrates how you would code such an NMS for the purpose of Canny edge detection.</p>\n<h2 class=\"mume-header\" id=\"hysterisis-thresholding\">Hysteresis Thresholding</h2>\n\n<p>The final step of this multi-stage algorithm decides which among all the edges are really edges and which of them are not. It accomplishes this using two threshold values, specified when we call the <code>cv2.Canny()</code> function:</p>\n<pre data-role=\"codeBlock\" data-info=\"py\" class=\"language-python\">canny <span class=\"token operator\">=</span> cv2<span class=\"token punctuation\">.</span>Canny<span class=\"token punctuation\">(</span>img<span class=\"token punctuation\">,</span> threshold1<span class=\"token operator\">=</span><span class=\"token number\">50</span><span class=\"token punctuation\">,</span> threshold2<span class=\"token operator\">=</span><span class=\"token number\">180</span><span class=\"token punctuation\">)</span>\n</pre><p>Any edges with an intensity gradient above <code>threshold2</code> are considered edges, and any edges below <code>threshold1</code> are considered non-edges and so are suppressed.</p>\n<p>The edges that lie between these two values (in our code above, edges with intensity gradient between 50 and 180) are classified as edges <strong>if they are connected to sure-edge pixels</strong> (the ones above 180); otherwise they are also discarded.</p>\n<p>This stage also removes small pixel &quot;noise&quot; on the assumption that edges are long, connected lines.</p>\n<p>The full procedure is implemented in a single function, <code>cv2.Canny()</code>, and the first three parameters are required: the input image and the first and second threshold values. 
<code>canny_01.py</code> implements this and compare that to the Sobel Edge detector we developed earlier:</p>\n<p><img src=\"assets/sobelvscanny.png\" alt></p>\n<h2 class=\"mume-header\" id=\"learn-by-building\">Learn By Building</h2>\n\n<p>In the <code>homework</code> directory, you&apos;ll find a picture of scattered lego bricks <code>lego.jpg</code>. Exactly the kind of stuff you don&apos;t want on your bedroom floor, as anyone living with kids at home would testify.</p>\n<p>Your job is to apply what you&apos;ve learned in this lesson to combine what you&apos;ve learned from the class in kernel convolutions and Edge Detection (<code>kernel.md</code>) to build a lego brick counter.</p>\n<p>Note that there are many ways you can build an edge detection. Given what you&apos;ve learned so far, there are at least 3 equally adequate routines you can apply for this particular problem set.</p>\n<p>For the sake of this exercise, your script should feature the use of a Sobel Operator (or a similar gradient-based edge detection method) since this is the main topic of this chapter.</p>\n<p><img src=\"homework/lego.jpg\" alt></p>\n<h2 class=\"mume-header\" id=\"references\">References</h2>\n\n<hr class=\"footnotes-sep\">\n<section class=\"footnotes\">\n<ol class=\"footnotes-list\">\n<li id=\"fn1\" class=\"footnote-item\"><p>S.Kaur, I.Singh, Comparison between Edge Detection Techniques, International Journal of Computer Applications, July 2016 <a href=\"#fnref1\" class=\"footnote-backref\">&#x21A9;&#xFE0E;</a> <a href=\"#fnref1:1\" class=\"footnote-backref\">&#x21A9;&#xFE0E;</a> <a href=\"#fnref1:2\" class=\"footnote-backref\">&#x21A9;&#xFE0E;</a></p>\n</li>\n<li id=\"fn2\" class=\"footnote-item\"><p>Carnegie Mellon University, Image Gradients and Gradient Filtering (16-385 Computer Vision) <a href=\"#fnref2\" class=\"footnote-backref\">&#x21A9;&#xFE0E;</a></p>\n</li>\n<li id=\"fn3\" class=\"footnote-item\"><p>Image Gradients, <a 
href=\"https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_gradients/py_gradients.html\">OpenCV Documentation</a> <a href=\"#fnref3\" class=\"footnote-backref\">&#x21A9;&#xFE0E;</a></p>\n</li>\n<li id=\"fn4\" class=\"footnote-item\"><p>University of Victoria, Electrical and Computer Engineering, Computer Vision: Image Segmentation <a href=\"#fnref4\" class=\"footnote-backref\">&#x21A9;&#xFE0E;</a></p>\n</li>\n<li id=\"fn5\" class=\"footnote-item\"><p>Image Thresholding, <a href=\"https://docs.opencv.org/master/d7/d4d/tutorial_py_thresholding.html\">OpenCV Documentation</a> <a href=\"#fnref5\" class=\"footnote-backref\">&#x21A9;&#xFE0E;</a></p>\n</li>\n<li id=\"fn6\" class=\"footnote-item\"><p>C.Leubner, A Framework for Segmentation and Contour Approximation in Computer-Vision Systems, 2002 <a href=\"#fnref6\" class=\"footnote-backref\">&#x21A9;&#xFE0E;</a></p>\n</li>\n<li id=\"fn7\" class=\"footnote-item\"><p>Contours: Getting Started, <a href=\"https://docs.opencv.org/trunk/d4/d73/tutorial_py_contours_begin.html\">OpenCV Documentation</a> <a href=\"#fnref7\" class=\"footnote-backref\">&#x21A9;&#xFE0E;</a></p>\n</li>\n<li id=\"fn8\" class=\"footnote-item\"><p>Structural Analysis and Shape Descriptors, <a href=\"https://docs.opencv.org/master/d3/dc0/group__imgproc__shape.html#ga819779b9857cc2f8601e6526a3a5bc71\">OpenCV Documentation</a> <a href=\"#fnref8\" class=\"footnote-backref\">&#x21A9;&#xFE0E;</a></p>\n</li>\n<li id=\"fn9\" class=\"footnote-item\"><p>Contours Hierarchy, <a href=\"https://docs.opencv.org/trunk/d9/d8b/tutorial_py_contours_hierarchy.html\">OpenCV Documentation</a> <a href=\"#fnref9\" class=\"footnote-backref\">&#x21A9;&#xFE0E;</a></p>\n</li>\n<li id=\"fn10\" class=\"footnote-item\"><p>Shapiro, L. G. and Stockman, G. 
C, Computer Vision, London etc, 2001 <a href=\"#fnref10\" class=\"footnote-backref\">&#x21A9;&#xFE0E;</a></p>\n</li>\n<li id=\"fn11\" class=\"footnote-item\"><p>Bastan, M., Bukhari, S., and Breuel, T., Active Canny: Edge Detection and Recovery with Open Active Contour Models, Technical University of Kaiserslautern, 2016 <a href=\"#fnref11\" class=\"footnote-backref\">&#x21A9;&#xFE0E;</a></p>\n</li>\n<li id=\"fn12\" class=\"footnote-item\"><p>Maini, R. and Aggarwal, H., Study and Comparison of various Image Edge Detection Techniques, Internal Jounral of Image Processing (IJIP) <a href=\"#fnref12\" class=\"footnote-backref\">&#x21A9;&#xFE0E;</a></p>\n</li>\n<li id=\"fn13\" class=\"footnote-item\"><p><a href=\"https://github.com/onlyphantom/Canny-edge-detector/blob/master/nonmax_suppression.py\">Example code for NMS, github</a> <a href=\"#fnref13\" class=\"footnote-backref\">&#x21A9;&#xFE0E;</a></p>\n</li>\n</ol>\n</section>\n\n      </div>\n      <div class=\"md-sidebar-toc\"><ul>\n<li><a href=\"#definition\">Definition</a></li>\n<li><a href=\"#gradient-based-edge-detection\">Gradient-based Edge Detection</a>\n<ul>\n<li><a href=\"#sobel-operator\">Sobel Operator</a>\n<ul>\n<li><a href=\"#intuition-discrete-derivative\">Intuition: Discrete Derivative</a></li>\n<li><a href=\"#code-illustrations-sobel-operator\">Code Illustrations: Sobel Operator</a></li>\n<li><a href=\"#dive-deeper-gradient-orientation-magnitude\">Dive Deeper: Gradient Orientation &amp; Magnitude</a></li>\n</ul>\n</li>\n</ul>\n</li>\n<li><a href=\"#image-segmentation\">Image Segmentation</a>\n<ul>\n<li><a href=\"#intensity-based-segmentation\">Intensity-based Segmentation</a>\n<ul>\n<li><a href=\"#simple-thresholding\">Simple Thresholding</a></li>\n<li><a href=\"#adaptive-thresholding\">Adaptive Thresholding</a></li>\n</ul>\n</li>\n<li><a href=\"#edge-based-contour-estimation\">Edge-based contour estimation</a>\n<ul>\n<li><a href=\"#contour-retrieval-and-approximation\">Contour Retrieval and 
Approximation</a></li>\n</ul>\n</li>\n</ul>\n</li>\n<li><a href=\"#canny-edge-detector\">Canny Edge Detector</a>\n<ul>\n<li><a href=\"#edge-thinning\">Edge Thinning</a></li>\n<li><a href=\"#hysterisis-thresholding\">Hysteresis Thresholding</a></li>\n<li><a href=\"#learn-by-building\">Learn By Building</a></li>\n<li><a href=\"#references\">References</a></li>\n</ul>\n</li>\n</ul>\n</div>\n      <a id=\"sidebar-toc-btn\">&#x2261;</a>\n    \n    \n    \n    \n    \n    \n    \n    \n<script>\n\nvar sidebarTOCBtn = document.getElementById('sidebar-toc-btn')\nsidebarTOCBtn.addEventListener('click', function(event) {\n  event.stopPropagation()\n  if (document.body.hasAttribute('html-show-sidebar-toc')) {\n    document.body.removeAttribute('html-show-sidebar-toc')\n  } else {\n    document.body.setAttribute('html-show-sidebar-toc', true)\n  }\n})\n</script>\n      \n  \n    </body></html>"
  },
  {
    "path": "edgedetect/edgedetect.md",
"content": "# Definition\nAn edge can be defined as a boundary between regions in an image[^1]. The edge detection techniques we'll learn in this course build upon what we've learned from our lessons in kernel convolution. Edge detection is the process of using kernels to reduce the information in our data and preserve only the necessary structural properties of our image[^1].\n\n# Gradient-based Edge Detection\nThe gradient points in the direction of the most rapid increase in intensity. When we apply a gradient-based edge detection method, we are searching for the maxima and minima in the first derivative of the image. \n\nWhen we apply our convolution onto the image, we are looking for regions in the image where there's a sharp change in intensity or color. Arguably the most common edge detection method using this approach is the Sobel Operator. \n\n## Sobel Operator\nThe `Sobel` operator applies a filtering operation to produce an image output where the edges are emphasized. It convolves our original image using two 3x3 kernels to capture approximations of the derivatives in both the horizontal and vertical directions.\n\nThe x-direction and y-direction kernels would be: \n\n$$G_x = \\begin{bmatrix} 1 & 0 & -1 \\\\ 2 & 0 & -2 \\\\ 1 & 0 & -1  \\end{bmatrix}\n G_y = \\begin{bmatrix} 1 & 2 & 1 \\\\ 0 & 0 & 0 \\\\ -1 & -2 & -1  \\end{bmatrix}\n$$\n\nEach kernel is applied separately to obtain the gradient component in each orientation, $G_x$ and $G_y$. Expressed in formula, the gradient magnitude is:\n$$|G| = \\sqrt{G^2_x + G^2_y} $$\n\nwhere the slope $\\theta$ of the gradient is calculated as follows:\n$$\\theta(x,y)=\\tan^{-1}(\\frac{G_y}{G_x})$$\n\nIf the two formulas above confuse you, read on as we unpack these ideas one at a time. 
\n\n### Intuition: Discrete Derivative\nIn computer vision literature, you'll often hear about \"taking the derivative\", and this may serve as a source of confusion for beginning practitioners since derivatives are often thought of in the context of continuous functions. Images are a 2D matrix of discrete values, so how do we wrap our heads around the idea of finding a derivative?\n\nBut why do we even bother with derivatives when this course is supposed to be about edge detection in images? \n\n![](assets/derivatives.png)\n\nAmong the many ways to answer the question, my favorite is that an image is really just a function. When we treat an image as a function, the utility of taking derivatives becomes a little more obvious. In the image below, suppose you want to count the number of windows in this area of Venezia Sestiere Cannaregio; your program can look for large derivatives, since there are sharp changes in pixel intensity from the windows to the surrounding wall:\n\n![](assets/surface.png)\n\nThe code to generate the surface plot above is in `img2surface.py`.\n\nGoing back to the x-direction kernel in our Sobel Operator: this kernel has a column of zeros in the middle, which is quite easy to intuit about. Essentially, for each pixel in our image, we want to compute its derivative in the x-direction by approximating a formula that you may have come across in your calculus class:\n\n$$f'(x) = \\lim_{h\\to0}\\frac{f(x+h)-f(x)}{h}$$\n\nThis approximation is also called the 'forward difference', because we're taking a value of $x$ and computing the difference in $f(x)$ as we increment it by a small amount forward, denoted as $h$. 
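\n\nThe forward difference is easy to verify on a small row of pixels. Below is a minimal `numpy` sketch with $h=1$; the pixel values are made up for illustration:\n\n
```python
import numpy as np

# a made-up row of pixel intensities with one sharp jump
row = np.array([10, 10, 80, 80, 80], dtype=float)

# forward difference with h = 1: f(x+1) - f(x)
forward = row[1:] - row[:-1]
print(forward)  # non-zero only at the jump from 10 to 80
```
\n\nOnly the boundary between the two intensity regions produces a non-zero response, which is exactly the behavior we want from an edge detector.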
\n\nAnd as it turns out, using the 'central difference' to compute the derivative of our discrete signal can deliver better results[^2]:\n\n$$f'(x) = \\lim_{h\\to0}\\frac{f(x+0.5h)-f(x-0.5h)}{h}$$\n\nTo make this more concrete, we can plug the formula into an actual array of pixels:\n\n$$[0, 255, 65, \\underline{180}, 255, 255, 255]$$\n\nWhen we set $h=2$ at the center pixel (the underlined value, 180), we have the following:\n\n$$\\begin{aligned}\nf'(x) & = \\lim_{h\\to0}\\frac{f(x+0.5h)-f(x-0.5h)}{h}\\\\\n& = \\frac{f(x+1)-f(x-1)}{2} \\\\\n& = \\frac{255-65}{2} \\\\ \n& = 95 \\end{aligned}$$\n\nNotice that a large part of the calculation we just performed is equivalent to a 1D convolution operation using a $\\begin{bmatrix} -1 & 0 &  1 \\end{bmatrix}$ kernel. \n\nWhen the same 1x3 kernel $\\begin{bmatrix} -1 & 0 &  1 \\end{bmatrix}$ is applied on the right-most part of the image where it's just white space ([..., 255, 255, 255]), the kernel evaluates to 0. In other words, our derivative filter returns no response where it can't detect a sharp change in pixel intensity.\n\nAs a reminder, the x-direction kernel in our Sobel Operator is the following:\n$$G_x = \\begin{bmatrix} 1 & 0 & -1 \\\\ 2 & 0 & -2 \\\\ 1 & 0 & -1  \\end{bmatrix}$$\n\nThis takes our 1x3 kernel and, instead of convolving one row of pixels at a time, extends it to convolve 3x3 neighborhoods at a time using a weighted average approach.\n\n### Code Illustrations: Sobel Operator\nThe two kernels (one for horizontal and another for vertical edge detection) can be constructed, respectively, like the following:\n\n```py\nimport numpy as np\n\nsobel_x = np.array([[1, 0, -1],\n                    [2, 0, -2],\n                    [1, 0, -1]])\n\nsobel_y = np.array([[1, 2, 1],\n                    [0, 0, 0],\n                    [-1, -2, -1]])\n```\n\nYou may have guessed that, given its role in digital image processing, `opencv` would include a method that performs the Sobel Operator for us, and thankfully it does. 
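\n\nBefore reaching for the built-in, it can be instructive to apply `sobel_x` by hand. The following is a sketch of a naive valid-mode sliding-window filter (technically a correlation rather than a convolution; the distinction is covered in `kernel.md`), run on a made-up 4x4 image with one vertical edge:\n\n
```python
import numpy as np

sobel_x = np.array([[1, 0, -1],
                    [2, 0, -2],
                    [1, 0, -1]])

def filter2d(img, kernel):
    # naive valid-mode sliding window; a teaching sketch, not optimized
    kh, kw = kernel.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# made-up image: a vertical edge between the 10s and the 80s
img = np.array([[10, 10, 80, 80],
                [10, 10, 80, 80],
                [10, 10, 80, 80],
                [10, 10, 80, 80]], dtype=float)
print(filter2d(img, sobel_x))  # every window straddling the edge responds
```
\n\nEvery 3x3 window straddles the intensity jump here, so every output value fires; on a flat patch the same code would return all zeros.\n\n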
Here's an example of using the `cv2.Sobel(src, ddepth, dx, dy, dst=None, ksize=3)` method:\n\n```py\ngradient_x = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)\ngradient_y = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)\nprint(f\"Range: {np.min(gradient_x)} | {np.max(gradient_x)}\")\n# Range: -177.0 | 204.0\n\ngradient_x = np.uint8(np.absolute(gradient_x))\ngradient_y = np.uint8(np.absolute(gradient_y))\nprint(f\"Range uint8: {np.min(gradient_x)} | {np.max(gradient_x)}\")\n# Range uint8: 0 | 204\n\ncv2.imshow(\"Gradient X\", gradient_x)\ncv2.imshow(\"Gradient Y\", gradient_y)\n```\n![](assets/sudokudemo.png)\n\nThe code above, extracted from `sobel_01.py`, reinforces a couple of ideas that we've been working on. It shows that:\n- $G_x$ and $G_y$, the gradients of the image, are computed separately through the convolution of two different Sobel kernels\n- $G_x$ and $G_y$ respond to the change in pixel values along the x-direction and y-direction respectively, as visualized in the illustration above\n- convolution using the two Sobel filters may, and often will, produce a value outside the range of 0 to 255. Given the presence of [-1, -2, -1] on one side of our kernel, mathematically this may lead to an output value as low as -1020. To store the values from these convolutions we use a 64-bit floating point (`cv2.CV_64F`). OpenCV suggests we \"keep the output datatype to some higher form such as `cv2.CV_64F`, take its absolute value and then convert back to `cv2.CV_8U`.[^3]\"\n\nWhile the code above certainly works, OpenCV also has a method that scales, calculates absolute values and converts the result to 8-bit. 
`cv2.convertScaleAbs(src, dst=None, alpha=1, beta=0)` performs the following:\n$$dst(I) = \\text{cast}_{\\text{uchar}}(|src(I) \\cdot \\alpha + \\beta|)$$\n\n```py\ngradient_x = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)\ngradient_y = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)\n\ngradient_x = cv2.convertScaleAbs(gradient_x)\ngradient_y = cv2.convertScaleAbs(gradient_y)\nprint(f\"Range: {np.min(gradient_x)} | {np.max(gradient_x)}\")\n```\n\n### Dive Deeper: Gradient Orientation & Magnitude\nAt the beginning of this course I said that images are really just 2D functions, before showing you the intricacies of our Sobel kernels. We saw the clever design of both the x- and y-direction kernels, which borrow from the concept of \"taking the derivative\" you often see in calculus textbooks. \n\nBut on a really basic level, these kernels only return the x and y edge responses. These are **not the image gradient**, just pure arithmetic values from following the convolution process. To get to the final form (where the edges in our image are emphasized) we still need to compute the gradient direction and magnitude for each point in our image. \n\nThis brings us back to our original formula. Recall that the x-direction and y-direction kernels are: \n\n$$G_x = \\begin{bmatrix} 1 & 0 & -1 \\\\ 2 & 0 & -2 \\\\ 1 & 0 & -1  \\end{bmatrix}\n G_y = \\begin{bmatrix} 1 & 2 & 1 \\\\ 0 & 0 & 0 \\\\ -1 & -2 & -1  \\end{bmatrix}\n$$\n\nWe understand that each kernel is applied separately to obtain the gradient component in each orientation, $G_x$ and $G_y$. What is the significance of this? Well, as it turns out, if we know the change in the x-direction and the corresponding change in value in the y-direction, then we can use the Pythagorean theorem to approximate the \"length of the slope\", a concept that many of you are familiar with. 
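\n\nTo make the Pythagorean intuition concrete, here is a sketch with a single pair of edge responses (the numbers are assumptions for illustration, not output from a real image):\n\n
```python
import numpy as np

# hypothetical edge responses at a single pixel
gx, gy = 3.0, 4.0

magnitude = np.hypot(gx, gy)            # sqrt(gx**2 + gy**2)
theta = np.degrees(np.arctan2(gy, gx))  # gradient direction in degrees
print(magnitude, theta)
```
\n\n`np.arctan2` is generally preferred over taking a plain arctangent of the ratio, because it handles the $G_x = 0$ case without dividing by zero and keeps track of the quadrant.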
\n\nExpressed in formula, the gradient magnitude is hence:\n$$|G| = \\sqrt{G^2_x + G^2_y} $$\n\nAlong with the Pythagorean theorem, some of you may also have some familiarity with the three trigonometric functions. In particular, the tangent function tells us that in a right triangle, the **tangent of an angle is the length of the opposite side divided by the length of the adjacent side**.\n\nThis leads us to the following expression:\n$$\\tan(\\theta_{(x,y)})=\\frac{G_y}{G_x}$$\n\nRewriting the expression above, we arrive at the formula to capture the gradient's direction:\n$$\\theta_{(x,y)}=\\tan^{-1}(\\frac{G_y}{G_x})$$\n\n![](assets/2dfuncs.png)\n\nThis whole idea is also illustrated in code, and the scripts are provided to you: \n- `gradient.py` generates the vector field in the picture above (right)\n- `img2surface.py`, applied to the penguin image in the `assets` folder, generates the surface plot\n\nIn short, suppose the two 3x3 kernels do not fire a response (for example when no edges are detected in the white background of our penguin); both $G_x$ and $G_y$ will be 0, which leads to a gradient magnitude of 0. You can compute these by hand, let OpenCV's implementation handle that for you, or use `numpy` as illustrated in `gradient.py`:\n\n```py\ndY, dX = np.gradient(img)\n```\n\n# Image Segmentation\nImage segmentation is the process of decomposing an image into parts for further analysis. 
This has many uses:\n\n- Background subtraction in human motion analysis\n- Multi-object classification\n- Finding regions of interest for OCR (optical character recognition)\n- Counting pedestrians from a streamed video source\n- Isolating vehicle registration plates (license plates) and vehicle models from a busy highway scene\n\nCurrent literature on image segmentation techniques can be classified into[^4]:\n- Intensity-based segmentation\n- Edge-based segmentation\n- Region-based semantic segmentation\n\nIt's important to note, however, that the rise in popularity of deep learning frameworks and techniques has ushered in a proliferation of new methods to perform what was once a highly difficult task. In future lectures, we'll explore image segmentation in far greater detail. In this course, we'll study intensity-based segmentation and edge-based segmentation methods.\n\n## Intensity-based Segmentation\nThe intensity-based method is perhaps the simplest, as intensity is the simplest property that pixels can share. \n\nTo make a more concrete case of this, let's assume you're working with a team of researchers to build an AI-based \"sudoku solver\" that, unimaginatively, will compete against human sudoku players in an attempt to further stake the claim in an ongoing debate of AI superiority. \n\nWhile your teammates work on the algorithmic design for the actual solver, your task is comparatively straightforward: write a script to scan newspaper images (or print magazines) and binarize them to discard everything except the digits in the sudoku puzzle.\n\nThis presents a great opportunity to use an intensity-based segmentation technique we spoke about earlier.\n\nIn `intensitythresholding_01.py`, you'll find a code demonstration of the numerous thresholding methods provided by OpenCV. In total, there are 5 simple thresholding methods: `THRESH_BINARY`, `THRESH_BINARY_INV`, `THRESH_TRUNC`, `THRESH_TOZERO` and `THRESH_TOZERO_INV`[^5]. 
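\n\nThe effect of each flag can be sketched in plain `numpy`. The snippet below is a re-implementation of the five rules for illustration only; in practice you would call `cv2.threshold` directly, and the pixel values here are made up:\n\n
```python
import numpy as np

row = np.array([0, 60, 120, 180, 240])   # a made-up row of pixel values
thresh, maxval = 127, 255
above = row > thresh

binary     = np.where(above, maxval, 0)      # THRESH_BINARY
binary_inv = np.where(above, 0, maxval)      # THRESH_BINARY_INV
trunc      = np.where(above, thresh, row)    # THRESH_TRUNC
tozero     = np.where(above, row, 0)         # THRESH_TOZERO
tozero_inv = np.where(above, 0, row)         # THRESH_TOZERO_INV

print(binary)      # only the two pixels above 127 survive, as 255
print(tozero_inv)  # the pixels above 127 are zeroed, the rest pass through
```
\n\nReading the five lines side by side makes the naming scheme obvious: each flag is just a different pair of choices for the pixels above and below the global threshold.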
\n\n### Simple Thresholding\nThe method call is identical for all of them:\n```py\ncv2.threshold(img, thresh, maxval, type)\n```\nWe specify our source image `img` (usually in grayscale), a threshold value `thresh` used to binarize the image pixels, and a max value `maxval` to use for any pixel that crosses our threshold. \n\nThe mathematical functions for each one of them:\n![](assets/threshmethods.png)\n\nThey're collectively known as **simple thresholding** in OpenCV because they use a global threshold value; any pixel smaller than the threshold is set to 0, otherwise it is set to the `maxval` value. \n\nThey probably sound too simplistic for anything beyond the simplest of real-world images, and for the majority of cases they are. They call for proper judgment of the task at hand. \n\nApplying the various simple thresholding methods on our sudoku image, we observe that the digits are for the most part extracted successfully while the background information is greatly reduced:\n\n![](assets/sudoku_simple.png)\n\nRefer to `intensitythresholding_01.py` for the full code. \n\nAs a simple homework, try to practice **simple thresholding** on the `car2.png` located in your `homework` folder. To reduce noise, you may have to apply a blurring operation prior to thresholding. As you practice, pay attention to the interaction between your threshold values and the output. Later in the course, you'll learn how to draw contours, which would come in handy in producing the final output:\n\n![](assets/cars_hw.png)\n\nAs you work on your homework, you will notice that given the varying lighting conditions across the different regions of our image, whichever global value we pick will be either too low or too high for some regions. 
\n\n### Adaptive Thresholding\nUsing a global value as an intensity threshold may work in particular cases, but is often too naive to perform well when, say, an image has different lighting conditions in different areas. A great example of this case is the object extraction exercise you performed using `car2.png`.\n\nAdaptive thresholding is not a lot different from the aforementioned thresholding techniques, except that it determines the threshold for each pixel based on its neighborhood. This in effect means that the image is assigned different thresholds across the different regions, leading to a cleaner output when our image has different degrees of illumination.\n\n![](assets/cars_adaptive.png)\n\nThe method is called with the source image (`src`), a max value (`maxValue`), the method (`adaptiveMethod`), a threshold type (`thresholdType`), the size of the neighborhood (`blockSize`) and a constant (`C`) that is subtracted from the mean or the weighted sum of the neighborhood pixels. \n\n```py\nmean_adaptive = cv2.adaptiveThreshold(\n    img, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 11, 2\n)\ngaussian_adaptive = cv2.adaptiveThreshold(\n    img, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 11, 2\n)\n```\n\nThe code, taken from `adaptivethresholding_01.py`, produces the following:\n![](assets/sudoku_binary.png)\n\n## Edge-based contour estimation\nEdge-based segmentation separates foreground objects by first identifying all edges in our image. The Sobel Operator and other gradient-based filter functions are well-known candidates for such an operation.[^6] \n\nOnce we obtain the edges, we perform the contour approximation operation using the `findContours` method in OpenCV. But what exactly are contours?\n\nIn OpenCV's words[^7],\n> Contours can be explained simply as a curve joining all the continuous points (along the boundary), having same color or intensity. 
The contours are a useful tool for shape analysis and object detection and recognition.\n\nIf we have \"a curve joining all the continuous points along the boundary\", then we are able to extract this object. If we wish to count the number of contours in our image, the method also conveniently returns a list of all the found contours, making it easy to perform `len()` on the list to retrieve the final value.\n\nThere are three arguments to the `findContours()` function: the first being the source image, the second the retrieval mode, and the last the contour approximation method. Both the contour retrieval mode and the approximation method are discussed in the next sub-section.\n```py\n(cnts, hierarchy) = cv2.findContours(\n    img,\n    cv2.RETR_EXTERNAL,\n    cv2.CHAIN_APPROX_SIMPLE,\n)\n```\nThe function returns the contours and hierarchy, with contours being a list of all the contours in the image. Each contour is a Numpy array of `(x,y)` coordinates of boundary points of the object, giving each contour a shape of `(n, 1, 2)`.\n\nWhat this allows us to do is combine the contours we retrieved with the `cv2.drawContours()` function, either individually, exhaustively in a for-loop fashion, or everything in one go.\n\nAssuming `img` is the image we want to draw our contours on, the following code demonstrates these different methods:\n```py\n# draw all contours\ncv2.drawContours(img, cnts, -1, (0,255,0), 3)\n# draw the 3rd contour\ncv2.drawContours(img, cnts, 2, (0,255,0), 3)\n# draw the first, fourth and fifth contour\ncnt_selected = [cnts[0], cnts[3], cnts[4]]\ncv2.drawContours(img, cnt_selected, -1, (0, 255, 255), 1)\n# draw the fourth contour\ncv2.drawContours(img, cnts, 3, (0,255,0), 3)\n```\nThe first argument to this function is the source image, the second is the contours as a Python list, the third is the index of the contour to draw (-1 draws all of them), and the remaining arguments are the color and thickness of the contour lines respectively.\n\nOne common problem beginners can run into is 
to perform the `findContours` operation on the grayscale image instead of the binary image, leading to poorer accuracy.\n\nWhen we execute `contour_01.py`, we notice that the `drawContours` operation yields the following output:\n\n![](assets/handholding.png)\n\nThere are 5 occurrences where our `findContours` function approximated the wrong contour because two penguins were too close to each other. When we execute `len(cnts)`, we will find that the returned value is 5 less than the actual count. \n\nTry to fix `contour_01.py` by performing the contour approximation on our binary image using the thresholding technique you've learned in the previous section.  \n\n### Contour Retrieval and Approximation\nIn the `findContours()` function call, we passed our image to `src` in the first argument. The second argument is the contour retrieval mode, and the documentation lists 4 of them[^8]:\n- `RETR_EXTERNAL`: retrieves only the extreme outer contours (see image below for reference)\n- `RETR_LIST`: retrieves all contours without establishing any hierarchical relationships\n- `RETR_CCOMP`: retrieves all contours and organizes them into a two-level hierarchy (external boundary + boundaries of the holes)\n- `RETR_TREE`: retrieves all of the contours and reconstructs a full hierarchy of nested contours\n\n![](assets/outervsall.png)\n\nIn our case, we don't particularly care about the hierarchy, and so the second to fourth modes all have the same effect. In other cases, you may experiment with a different contour retrieval mode to obtain both the contours and the hierarchy for further processing.\n\nWhat about the last parameter passed to our `findContours` method? \n\nRecall that contours are just the boundaries of a shape. In a sense, a contour is an array of `(x,y)` coordinates used to \"record\" the boundary of a shape. Given this collection of coordinates, we can then recreate the boundary of our shape. 
This begs the next question: how many sets of coordinates do we need to store to recreate our boundary?\n\nSuppose we perform the `findContours` operation on an image of two rectangles. One way to achieve this is to store as many points around these rectangles as possible. When we set `cv2.CHAIN_APPROX_NONE`, that is in fact what the algorithm does, resulting in 658 points around the border of the top rectangle:\n![](homework/equal.png)\n\nHowever, notice that the more efficient solution would have been to store only the 4 corner coordinates of each rectangle. The contour is perfectly represented and recreated using just 4 points for each rectangle, resulting in a total of 8 points compared to 1,316 points. `cv2.CHAIN_APPROX_SIMPLE`[^9] is an implementation of this, and you can find the sample code below: \n\n![](assets/approx.png)\n\n```py\ncnts, _ = cv2.findContours(\n        # does this need to be changed?\n        edged,\n        cv2.RETR_EXTERNAL,\n        cv2.CHAIN_APPROX_SIMPLE,\n    )\nprint(f\"Cnts Simple Shape (1): {cnts[0].shape}\")\n# return: Cnts Simple Shape (1): (4, 1, 2)\n# output of cnts[0]:\n# array([[[ 47, 179]],\n#       [[ 47, 259]],\n#       [[296, 259]],\n#       [[296, 179]]], dtype=int32)\n\ncnts2, _ = cv2.findContours(\n        # does this need to be changed?\n        edged,\n        cv2.RETR_EXTERNAL,\n        cv2.CHAIN_APPROX_NONE,\n    )\nprint(f\"Cnts NoApprox Shape: {cnts2[0].shape}\")\n# Cnts NoApprox Shape: (658, 1, 2)\n```\nThe full script for the experiment above is in `contourapprox.py`.\n\nYou may, at this point, hop to the Learn By Building section to attempt your homework.\n\n# Canny Edge Detector\nJohn Canny developed a multi-stage procedure that, some 30 years later, is \"still a state-of-the-art edge detector\"[^10]. 
Better edge detection algorithms usually require greater computational resources -- and consequently longer processing times -- or a greater number of parameters, in an area where algorithm speed is oftentimes the most important criterion. For the reasons above, along with its general robustness, the Canny edge algorithm has become one of the \"most important methods to find edges\" even in modern literature[^1].\n\nI said it's a multi-stage procedure because the technique, as described in his original paper, _A Computational Approach to Edge Detection_, works as follows[^11]:\n1. Gaussian smoothing\n    - Noise reduction using a 5x5 Gaussian filter\n2. Compute gradient magnitudes and angles\n3. Apply non-maximum suppression (NMS) \n    - Suppress close-by edges that are non-maximal, leaving only local maxima as edges\n4. Track edges by hysteresis\n    - Suppress all other edges that are weak and not connected to strong edges and link the edges\n\nSteps (1) and (2) in the procedure above can be achieved using code we've written so far in our Sobel Operator scripts. We use the Sobel mask filters to compute $G_x$ and $G_y$, respectively the gradient component in each orientation. We then compute the gradient magnitude and the angle $\\theta$:\n\nGradient magnitude:\n$$|G| = \\sqrt{G^2_x + G^2_y} $$\n\nAnd recall that the slope $\\theta$ of the gradient is calculated as follows:\n$$\\theta(x,y)=\\tan^{-1}(\\frac{G_y}{G_x})$$\n\n## Edge Thinning\nStep (3) in the procedure is another common technique in computer vision known as non-maximum suppression (NMS). Let's begin by taking a look at the output of our Sobel edge detector from earlier exercises:\n![](assets/sobeledges.png)\n\nNotice that as we zoom in on the output image, we can see the gradient-based method did create our strong edges, but it also created \"weak\" edges it found in our image. 
Because it is not a parameterized function -- the edge is computed using values of the gradient magnitude and direction -- we have to rely on an additional mechanism for the edge thinning operation, with the criterion being a single accurate response to any given edge[^12].\n\nNon-maximum suppression helps us obtain the strongest edges by suppressing all gradient values, i.e. setting them to 0, except for the local maxima, which indicate locations with the sharpest change of intensity value. In the words of `OpenCV`:\n> After getting gradient magnitude and direction, a full scan of image is done to remove any unwanted pixels which may not constitute the edge. For this, at every pixel, pixel is checked if it is a local maximum in its neighborhood in the direction of gradient. If point A is on the edge, and point B and C are in gradient directions, point A is checked with point B and C to see if it forms a local maximum. If so, it is considered for next stage, otherwise, it is suppressed (put to zero).\n\nThe output of step (3) is a binary image with thin edges.\n\nThe example code[^13] demonstrates how you would implement such an NMS for the purpose of Canny edge detection. \n\n## Hysteresis Thresholding\nThe final step of this multi-stage algorithm decides which among all edges are really edges and which of them are not. It accomplishes this using two threshold values, specified when we call the `cv2.Canny()` function:\n\n```py\ncanny = cv2.Canny(img, threshold1=50, threshold2=180)\n```\n\nAny edges with an intensity gradient above `threshold2` are considered edges, and any edges below `threshold1` are considered non-edges and so are suppressed. 
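\n\nThe two-threshold split itself is simple to sketch on a made-up patch of gradient magnitudes (the values below are assumptions for illustration; deciding the fate of the in-between pixels is what the rest of this section describes):\n\n
```python
import numpy as np

# made-up gradient magnitudes for a small 3x3 patch
grad = np.array([[ 30, 100, 200],
                 [ 10,  60, 190],
                 [  5,  40, 250]])

low, high = 50, 180
strong = grad > high             # sure edges, kept unconditionally
weak = (grad > low) & ~strong    # kept only if linked to a strong edge
print(strong.astype(int))
print(weak.astype(int))
```
\n\nEverything below `low` is discarded outright; the `weak` mask is the set of candidate pixels whose fate depends on connectivity to a `strong` pixel.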
\n\nThe edges that lie between these two values (in our code above, edges with intensity gradient between 50 and 180) are classified as edges **if they are connected to sure-edge pixels** (the ones above 180); otherwise they are also discarded.\n\nThis stage also removes small pixel blobs (\"noise\") on the assumption that edges are long lines (\"connected\").\n\nThe full procedure is implemented in a single function, `cv2.Canny()`, whose first three parameters are required, respectively being the input image and the first and second threshold values. `canny_01.py` implements this and compares it to the Sobel edge detector we developed earlier:\n\n![](assets/sobelvscanny.png)\n\n## Learn By Building\nIn the `homework` directory, you'll find a picture of scattered lego bricks, `lego.jpg`. Exactly the kind of stuff you don't want on your bedroom floor, as anyone living with kids at home would testify. \n\nYour job is to combine what you've learned in this lesson with what you've learned from the class on kernel convolutions and edge detection (`kernel.md`) to build a lego brick counter.\n\nNote that there are many ways you can build an edge detector. Given what you've learned so far, there are at least 3 equally adequate routines you can apply for this particular problem set. \n\nFor the sake of this exercise, your script should feature the use of a Sobel Operator (or a similar gradient-based edge detection method) since this is the main topic of this chapter. 
\n\n![](homework/lego.jpg)\n\n\n## References\n[^1]: S. Kaur, I. Singh, Comparison between Edge Detection Techniques, International Journal of Computer Applications, July 2016\n\n[^2]: Carnegie Mellon University, Image Gradients and Gradient Filtering (16-385 Computer Vision)\n\n[^3]: Image Gradients, [OpenCV Documentation](https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_gradients/py_gradients.html)\n\n[^4]: University of Victoria, Electrical and Computer Engineering, Computer Vision: Image Segmentation\n\n[^5]: Image Thresholding, [OpenCV Documentation](https://docs.opencv.org/master/d7/d4d/tutorial_py_thresholding.html)\n\n[^6]: C. Leubner, A Framework for Segmentation and Contour Approximation in Computer-Vision Systems, 2002\n\n[^7]: Contours: Getting Started, [OpenCV Documentation](https://docs.opencv.org/trunk/d4/d73/tutorial_py_contours_begin.html)\n\n[^8]: Structural Analysis and Shape Descriptors, [OpenCV Documentation](https://docs.opencv.org/master/d3/dc0/group__imgproc__shape.html#ga819779b9857cc2f8601e6526a3a5bc71)\n\n[^9]: Contours Hierarchy, [OpenCV Documentation](https://docs.opencv.org/trunk/d9/d8b/tutorial_py_contours_hierarchy.html)\n\n[^10]: Shapiro, L. G. and Stockman, G. C., Computer Vision, London etc., 2001\n\n[^11]: Bastan, M., Bukhari, S., and Breuel, T., Active Canny: Edge Detection and Recovery with Open Active Contour Models, Technical University of Kaiserslautern, 2016\n\n[^12]: Maini, R. and Aggarwal, H., Study and Comparison of various Image Edge Detection Techniques, International Journal of Image Processing (IJIP)\n\n[^13]: [Example code for NMS, github](https://github.com/onlyphantom/Canny-edge-detector/blob/master/nonmax_suppression.py)\n\n"
  },
  {
    "path": "edgedetect/gaussianblur_01.py",
    "content": "import cv2\n\nKERNEL_SIZE = (5, 5)\n\nimg = cv2.imread(\"assets/canal.png\")\n\n# a mean (box) blur weighs every neighbour equally; a Gaussian blur\n# weighs neighbours by their distance from the centre pixel\nmeanblurred = cv2.blur(img, KERNEL_SIZE)\ngaussianblurred = cv2.GaussianBlur(src=img, ksize=KERNEL_SIZE, sigmaX=0)\n\ncv2.imshow(\"Mean Blurred\", meanblurred)\ncv2.waitKey(0)\ncv2.imshow(\"Gaussian Blurred\", gaussianblurred)\ncv2.waitKey(0)\ncv2.destroyAllWindows()\n"
  },
  {
    "path": "edgedetect/gradient.py",
    "content": "import numpy as np\nimport matplotlib.pyplot as plt\nimport cv2\n\nimg = cv2.imread(\"assets/pen.jpg\")\n# cast to float before averaging: summing three uint8 channels would overflow 8 bits\nflat = img.astype(np.float32).mean(axis=2)\nprint(flat.shape)\nsa = 16  # sample at every 16th pixel\n\n\nfig, ax = plt.subplots(1, 1)\nret = ax.imshow(\n    flat, zorder=0, alpha=1.0, cmap=\"Greys_r\", origin=\"upper\", interpolation=\"hermite\",\n)\nplt.colorbar(ret)\nY, X = np.mgrid[0 : flat.shape[0] : sa, 0 : flat.shape[1] : sa]\ndY, dX = np.gradient(flat[::sa, ::sa])\nax.quiver(X, Y, dX, dY, color=\"r\")\nplt.show()\n"
  },
  {
    "path": "edgedetect/img2surface.py",
    "content": "import numpy as np\nimport cv2\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D  # noqa: F401 -- registers the 3d projection\n\nimg = cv2.imread(\"assets/sarpi.png\")\nblurred = cv2.GaussianBlur(img, (7, 7), 0)\nblurred = cv2.cvtColor(blurred, cv2.COLOR_BGR2GRAY)\n\n\nsize = 0.3\nwidth = int(blurred.shape[1] * size)\nheight = int(blurred.shape[0] * size)\nblurred = cv2.resize(blurred, (width, height), interpolation=cv2.INTER_AREA)\n\nprint(f\"Shape: {blurred.shape}\")\ncv2.imshow(\"Blurred\", blurred)\ncv2.waitKey(0)\n\nxx, yy = np.mgrid[0 : blurred.shape[0], 0 : blurred.shape[1]]\n\nfig = plt.figure()\n# fig.gca(projection=\"3d\") is deprecated; use add_subplot instead\nax = fig.add_subplot(projection=\"3d\")\nax.plot_surface(xx, yy, blurred, rstride=1, cstride=1, cmap=plt.cm.gray, linewidth=0)\n\nplt.show()\n"
  },
  {
    "path": "edgedetect/intensitythresholding_01.py",
    "content": "import cv2\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nimg = cv2.imread(\"assets/sudoku.jpg\", flags=0)\n\n_, img_threshold = cv2.threshold(img, 50, 255, cv2.THRESH_BINARY)\n_, img_trunc = cv2.threshold(img, 90, 255, cv2.THRESH_TRUNC)\n_, img_tozero = cv2.threshold(img, 55, 255, cv2.THRESH_TOZERO_INV)\n\nplt.subplot(2, 2, 1), plt.imshow(img, cmap=\"gray\")\nplt.title(\"Original\"), plt.xticks([]), plt.yticks([])\nplt.subplot(2, 2, 2), plt.imshow(img_threshold, cmap=\"gray\")\nplt.title(\"Binary Threshold\"), plt.xticks([]), plt.yticks([])\nplt.subplot(2, 2, 3), plt.imshow(img_trunc, cmap=\"gray\")\nplt.title(\"Truncate Threshold\"), plt.xticks([]), plt.yticks([])\nplt.subplot(2, 2, 4), plt.imshow(img_tozero, cmap=\"gray\")\nplt.title(\"To Zero (Inverted)\"), plt.xticks([]), plt.yticks([])\nplt.show()\n\n"
  },
  {
    "path": "edgedetect/kernel.html",
    "content": "<!DOCTYPE html><html><head>\n      <title>kernel</title>\n      <meta charset=\"utf-8\">\n      <meta name=\"viewport\" content=\"width=device-width, initial-scale=1.0\">\n      \n      \n        <script type=\"text/x-mathjax-config\">\n          MathJax.Hub.Config({\"extensions\":[\"tex2jax.js\"],\"jax\":[\"input/TeX\",\"output/HTML-CSS\"],\"messageStyle\":\"none\",\"tex2jax\":{\"processEnvironments\":false,\"processEscapes\":true,\"inlineMath\":[[\"$\",\"$\"],[\"\\\\(\",\"\\\\)\"]],\"displayMath\":[[\"$$\",\"$$\"],[\"\\\\[\",\"\\\\]\"]]},\"TeX\":{\"extensions\":[\"AMSmath.js\",\"AMSsymbols.js\",\"noErrors.js\",\"noUndefined.js\"]},\"HTML-CSS\":{\"availableFonts\":[\"TeX\"]}});\n        </script>\n        <script type=\"text/javascript\" async src=\"file:////Users/samuel/.vscode/extensions/shd101wyy.markdown-preview-enhanced-0.5.0/node_modules/@shd101wyy/mume/dependencies/mathjax/MathJax.js\" charset=\"UTF-8\"></script>\n        \n      \n      \n\n      \n      \n      \n      \n      \n      \n      \n\n      <style>\n      /**\n * prism.js Github theme based on GitHub's theme.\n * @author Sam Clarke\n */\ncode[class*=\"language-\"],\npre[class*=\"language-\"] {\n  color: #333;\n  background: none;\n  font-family: Consolas, \"Liberation Mono\", Menlo, Courier, monospace;\n  text-align: left;\n  white-space: pre;\n  word-spacing: normal;\n  word-break: normal;\n  word-wrap: normal;\n  line-height: 1.4;\n\n  -moz-tab-size: 8;\n  -o-tab-size: 8;\n  tab-size: 8;\n\n  -webkit-hyphens: none;\n  -moz-hyphens: none;\n  -ms-hyphens: none;\n  hyphens: none;\n}\n\n/* Code blocks */\npre[class*=\"language-\"] {\n  padding: .8em;\n  overflow: auto;\n  /* border: 1px solid #ddd; */\n  border-radius: 3px;\n  /* background: #fff; */\n  background: #f5f5f5;\n}\n\n/* Inline code */\n:not(pre) > code[class*=\"language-\"] {\n  padding: .1em;\n  border-radius: .3em;\n  white-space: normal;\n  background: #f5f5f5;\n}\n\n.token.comment,\n.token.blockquote {\n  color: 
#969896;\n}\n\n.token.cdata {\n  color: #183691;\n}\n\n.token.doctype,\n.token.punctuation,\n.token.variable,\n.token.macro.property {\n  color: #333;\n}\n\n.token.operator,\n.token.important,\n.token.keyword,\n.token.rule,\n.token.builtin {\n  color: #a71d5d;\n}\n\n.token.string,\n.token.url,\n.token.regex,\n.token.attr-value {\n  color: #183691;\n}\n\n.token.property,\n.token.number,\n.token.boolean,\n.token.entity,\n.token.atrule,\n.token.constant,\n.token.symbol,\n.token.command,\n.token.code {\n  color: #0086b3;\n}\n\n.token.tag,\n.token.selector,\n.token.prolog {\n  color: #63a35c;\n}\n\n.token.function,\n.token.namespace,\n.token.pseudo-element,\n.token.class,\n.token.class-name,\n.token.pseudo-class,\n.token.id,\n.token.url-reference .token.variable,\n.token.attr-name {\n  color: #795da3;\n}\n\n.token.entity {\n  cursor: help;\n}\n\n.token.title,\n.token.title .token.punctuation {\n  font-weight: bold;\n  color: #1d3e81;\n}\n\n.token.list {\n  color: #ed6a43;\n}\n\n.token.inserted {\n  background-color: #eaffea;\n  color: #55a532;\n}\n\n.token.deleted {\n  background-color: #ffecec;\n  color: #bd2c00;\n}\n\n.token.bold {\n  font-weight: bold;\n}\n\n.token.italic {\n  font-style: italic;\n}\n\n\n/* JSON */\n.language-json .token.property {\n  color: #183691;\n}\n\n.language-markup .token.tag .token.punctuation {\n  color: #333;\n}\n\n/* CSS */\ncode.language-css,\n.language-css .token.function {\n  color: #0086b3;\n}\n\n/* YAML */\n.language-yaml .token.atrule {\n  color: #63a35c;\n}\n\ncode.language-yaml {\n  color: #183691;\n}\n\n/* Ruby */\n.language-ruby .token.function {\n  color: #333;\n}\n\n/* Markdown */\n.language-markdown .token.url {\n  color: #795da3;\n}\n\n/* Makefile */\n.language-makefile .token.symbol {\n  color: #795da3;\n}\n\n.language-makefile .token.variable {\n  color: #183691;\n}\n\n.language-makefile .token.builtin {\n  color: #0086b3;\n}\n\n/* Bash */\n.language-bash .token.keyword {\n  color: #0086b3;\n}\n\n/* highlight 
*/\npre[data-line] {\n  position: relative;\n  padding: 1em 0 1em 3em;\n}\npre[data-line] .line-highlight-wrapper {\n  position: absolute;\n  top: 0;\n  left: 0;\n  background-color: transparent;\n  display: block;\n  width: 100%;\n}\n\npre[data-line] .line-highlight {\n  position: absolute;\n  left: 0;\n  right: 0;\n  padding: inherit 0;\n  margin-top: 1em;\n  background: hsla(24, 20%, 50%,.08);\n  background: linear-gradient(to right, hsla(24, 20%, 50%,.1) 70%, hsla(24, 20%, 50%,0));\n  pointer-events: none;\n  line-height: inherit;\n  white-space: pre;\n}\n\npre[data-line] .line-highlight:before, \npre[data-line] .line-highlight[data-end]:after {\n  content: attr(data-start);\n  position: absolute;\n  top: .4em;\n  left: .6em;\n  min-width: 1em;\n  padding: 0 .5em;\n  background-color: hsla(24, 20%, 50%,.4);\n  color: hsl(24, 20%, 95%);\n  font: bold 65%/1.5 sans-serif;\n  text-align: center;\n  vertical-align: .3em;\n  border-radius: 999px;\n  text-shadow: none;\n  box-shadow: 0 1px white;\n}\n\npre[data-line] .line-highlight[data-end]:after {\n  content: attr(data-end);\n  top: auto;\n  bottom: .4em;\n}html body{font-family:\"Helvetica Neue\",Helvetica,\"Segoe UI\",Arial,freesans,sans-serif;font-size:16px;line-height:1.6;color:#333;background-color:#fff;overflow:initial;box-sizing:border-box;word-wrap:break-word}html body>:first-child{margin-top:0}html body h1,html body h2,html body h3,html body h4,html body h5,html body h6{line-height:1.2;margin-top:1em;margin-bottom:16px;color:#000}html body h1{font-size:2.25em;font-weight:300;padding-bottom:.3em}html body h2{font-size:1.75em;font-weight:400;padding-bottom:.3em}html body h3{font-size:1.5em;font-weight:500}html body h4{font-size:1.25em;font-weight:600}html body h5{font-size:1.1em;font-weight:600}html body h6{font-size:1em;font-weight:600}html body h1,html body h2,html body h3,html body h4,html body h5{font-weight:600}html body h5{font-size:1em}html body h6{color:#5c5c5c}html body strong{color:#000}html body 
del{color:#5c5c5c}html body a:not([href]){color:inherit;text-decoration:none}html body a{color:#08c;text-decoration:none}html body a:hover{color:#00a3f5;text-decoration:none}html body img{max-width:100%}html body>p{margin-top:0;margin-bottom:16px;word-wrap:break-word}html body>ul,html body>ol{margin-bottom:16px}html body ul,html body ol{padding-left:2em}html body ul.no-list,html body ol.no-list{padding:0;list-style-type:none}html body ul ul,html body ul ol,html body ol ol,html body ol ul{margin-top:0;margin-bottom:0}html body li{margin-bottom:0}html body li.task-list-item{list-style:none}html body li>p{margin-top:0;margin-bottom:0}html body .task-list-item-checkbox{margin:0 .2em .25em -1.8em;vertical-align:middle}html body .task-list-item-checkbox:hover{cursor:pointer}html body blockquote{margin:16px 0;font-size:inherit;padding:0 15px;color:#5c5c5c;border-left:4px solid #d6d6d6}html body blockquote>:first-child{margin-top:0}html body blockquote>:last-child{margin-bottom:0}html body hr{height:4px;margin:32px 0;background-color:#d6d6d6;border:0 none}html body table{margin:10px 0 15px 0;border-collapse:collapse;border-spacing:0;display:block;width:100%;overflow:auto;word-break:normal;word-break:keep-all}html body table th{font-weight:bold;color:#000}html body table td,html body table th{border:1px solid #d6d6d6;padding:6px 13px}html body dl{padding:0}html body dl dt{padding:0;margin-top:16px;font-size:1em;font-style:italic;font-weight:bold}html body dl dd{padding:0 16px;margin-bottom:16px}html body code{font-family:Menlo,Monaco,Consolas,'Courier New',monospace;font-size:.85em !important;color:#000;background-color:#f0f0f0;border-radius:3px;padding:.2em 0}html body code::before,html body code::after{letter-spacing:-0.2em;content:\"\\00a0\"}html body pre>code{padding:0;margin:0;font-size:.85em !important;word-break:normal;white-space:pre;background:transparent;border:0}html body .highlight{margin-bottom:16px}html body .highlight pre,html body 
pre{padding:1em;overflow:auto;font-size:.85em !important;line-height:1.45;border:#d6d6d6;border-radius:3px}html body .highlight pre{margin-bottom:0;word-break:normal}html body pre code,html body pre tt{display:inline;max-width:initial;padding:0;margin:0;overflow:initial;line-height:inherit;word-wrap:normal;background-color:transparent;border:0}html body pre code:before,html body pre tt:before,html body pre code:after,html body pre tt:after{content:normal}html body p,html body blockquote,html body ul,html body ol,html body dl,html body pre{margin-top:0;margin-bottom:16px}html body kbd{color:#000;border:1px solid #d6d6d6;border-bottom:2px solid #c7c7c7;padding:2px 4px;background-color:#f0f0f0;border-radius:3px}@media print{html body{background-color:#fff}html body h1,html body h2,html body h3,html body h4,html body h5,html body h6{color:#000;page-break-after:avoid}html body blockquote{color:#5c5c5c}html body pre{page-break-inside:avoid}html body table{display:table}html body img{display:block;max-width:100%;max-height:100%}html body pre,html body code{word-wrap:break-word;white-space:pre}}.markdown-preview{width:100%;height:100%;box-sizing:border-box}.markdown-preview .pagebreak,.markdown-preview .newpage{page-break-before:always}.markdown-preview pre.line-numbers{position:relative;padding-left:3.8em;counter-reset:linenumber}.markdown-preview pre.line-numbers>code{position:relative}.markdown-preview pre.line-numbers .line-numbers-rows{position:absolute;pointer-events:none;top:1em;font-size:100%;left:0;width:3em;letter-spacing:-1px;border-right:1px solid #999;-webkit-user-select:none;-moz-user-select:none;-ms-user-select:none;user-select:none}.markdown-preview pre.line-numbers .line-numbers-rows>span{pointer-events:none;display:block;counter-increment:linenumber}.markdown-preview pre.line-numbers .line-numbers-rows>span:before{content:counter(linenumber);color:#999;display:block;padding-right:.8em;text-align:right}.markdown-preview .mathjax-exps 
.MathJax_Display{text-align:center !important}.markdown-preview:not([for=\"preview\"]) .code-chunk .btn-group{display:none}.markdown-preview:not([for=\"preview\"]) .code-chunk .status{display:none}.markdown-preview:not([for=\"preview\"]) .code-chunk .output-div{margin-bottom:16px}.scrollbar-style::-webkit-scrollbar{width:8px}.scrollbar-style::-webkit-scrollbar-track{border-radius:10px;background-color:transparent}.scrollbar-style::-webkit-scrollbar-thumb{border-radius:5px;background-color:rgba(150,150,150,0.66);border:4px solid rgba(150,150,150,0.66);background-clip:content-box}html body[for=\"html-export\"]:not([data-presentation-mode]){position:relative;width:100%;height:100%;top:0;left:0;margin:0;padding:0;overflow:auto}html body[for=\"html-export\"]:not([data-presentation-mode]) .markdown-preview{position:relative;top:0}@media screen and (min-width:914px){html body[for=\"html-export\"]:not([data-presentation-mode]) .markdown-preview{padding:2em calc(50% - 457px + 2em)}}@media screen and (max-width:914px){html body[for=\"html-export\"]:not([data-presentation-mode]) .markdown-preview{padding:2em}}@media screen and (max-width:450px){html body[for=\"html-export\"]:not([data-presentation-mode]) .markdown-preview{font-size:14px !important;padding:1em}}@media print{html body[for=\"html-export\"]:not([data-presentation-mode]) #sidebar-toc-btn{display:none}}html body[for=\"html-export\"]:not([data-presentation-mode]) #sidebar-toc-btn{position:fixed;bottom:8px;left:8px;font-size:28px;cursor:pointer;color:inherit;z-index:99;width:32px;text-align:center;opacity:.4}html body[for=\"html-export\"]:not([data-presentation-mode])[html-show-sidebar-toc] #sidebar-toc-btn{opacity:1}html body[for=\"html-export\"]:not([data-presentation-mode])[html-show-sidebar-toc] .md-sidebar-toc{position:fixed;top:0;left:0;width:300px;height:100%;padding:32px 0 48px 0;font-size:14px;box-shadow:0 0 4px rgba(150,150,150,0.33);box-sizing:border-box;overflow:auto;background-color:inherit}html 
body[for=\"html-export\"]:not([data-presentation-mode])[html-show-sidebar-toc] .md-sidebar-toc::-webkit-scrollbar{width:8px}html body[for=\"html-export\"]:not([data-presentation-mode])[html-show-sidebar-toc] .md-sidebar-toc::-webkit-scrollbar-track{border-radius:10px;background-color:transparent}html body[for=\"html-export\"]:not([data-presentation-mode])[html-show-sidebar-toc] .md-sidebar-toc::-webkit-scrollbar-thumb{border-radius:5px;background-color:rgba(150,150,150,0.66);border:4px solid rgba(150,150,150,0.66);background-clip:content-box}html body[for=\"html-export\"]:not([data-presentation-mode])[html-show-sidebar-toc] .md-sidebar-toc a{text-decoration:none}html body[for=\"html-export\"]:not([data-presentation-mode])[html-show-sidebar-toc] .md-sidebar-toc ul{padding:0 1.6em;margin-top:.8em}html body[for=\"html-export\"]:not([data-presentation-mode])[html-show-sidebar-toc] .md-sidebar-toc li{margin-bottom:.8em}html body[for=\"html-export\"]:not([data-presentation-mode])[html-show-sidebar-toc] .md-sidebar-toc ul{list-style-type:none}html body[for=\"html-export\"]:not([data-presentation-mode])[html-show-sidebar-toc] .markdown-preview{left:300px;width:calc(100% -  300px);padding:2em calc(50% - 457px -  150px);margin:0;box-sizing:border-box}@media screen and (max-width:1274px){html body[for=\"html-export\"]:not([data-presentation-mode])[html-show-sidebar-toc] .markdown-preview{padding:2em}}@media screen and (max-width:450px){html body[for=\"html-export\"]:not([data-presentation-mode])[html-show-sidebar-toc] .markdown-preview{width:100%}}html body[for=\"html-export\"]:not([data-presentation-mode]):not([html-show-sidebar-toc]) .markdown-preview{left:50%;transform:translateX(-50%)}html body[for=\"html-export\"]:not([data-presentation-mode]):not([html-show-sidebar-toc]) .md-sidebar-toc{display:none}\n/* Please visit the URL below for more information: */\n/*   https://shd101wyy.github.io/markdown-preview-enhanced/#/customize-css */\n.markdown-preview.markdown-preview 
h1,\n.markdown-preview.markdown-preview h2,\n.markdown-preview.markdown-preview h3,\n.markdown-preview.markdown-preview h4,\n.markdown-preview.markdown-preview h5,\n.markdown-preview.markdown-preview h6 {\n  font-weight: bolder;\n  text-decoration-line: underline;\n}\n\n      </style>\n    </head>\n    <body for=\"html-export\">\n      <div class=\"mume markdown-preview  \">\n      <div><h1 class=\"mume-header\" id=\"kernels\">Kernels</h1>\n\n<h2 class=\"mume-header\" id=\"definition\">Definition</h2>\n\n<p>When performing an arithmetic computation on a given image, one approach is to apply said computation in a neighborhood-by-neighborhood manner. This approach is very braodly termed as a <strong>convolution</strong>. In other words, convolution is an operation between every part of an image (&quot;pixel neighborhood&quot;) and an operator (&quot;kernel&quot;)<sup class=\"footnote-ref\"><a href=\"#fn1\" id=\"fnref1\">[1]</a></sup><sup class=\"footnote-ref\"><a href=\"#fn2\" id=\"fnref2\">[2]</a></sup>.</p>\n<p>As the computation slides over each pixel neighborhood, we perform some arithmetic using the kernel, with the kernel typically being represented as a matrix or a fixed size array.</p>\n<p>This kernel describes how the pixels in that neighborhood are combined or transformed to yield a corresponding output.</p>\n<ul>\n<li class=\"task-list-item\">\n<p><input type=\"checkbox\" class=\"task-list-item-checkbox\"> <a href=\"https://www.youtube.com/watch?v=WMmHcrX4Obg\">Watch Kernel Convolution Explained Visually</a></p>\n  <iframe width=\"560\" height=\"315\" src=\"https://www.youtube.com/embed/WMmHcrX4Obg\" frameborder=\"0\" allow=\"accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture\" allowfullscreen></iframe>\n</li>\n</ul>\n<h3 class=\"mume-header\" id=\"mathematical-definitions\">Mathematical Definitions</h3>\n\n<p>You will notice from the video that the output image now has a <strong>shape that is smaller</strong> than the original input. 
Mathematically, the shape of this output would be:</p>\n<p></p><div class=\"mathjax-exps\">$$(\\frac{X_m-M_i}{s_x})+1, (\\frac{X_n-M_j}{s_y})+1$$</div><p></p>\n<p>Where the input matrix has a size of <span class=\"mathjax-exps\">$(X_m, X_n)$</span>, the kernel <span class=\"mathjax-exps\">$M$</span> is of size <span class=\"mathjax-exps\">$(M_i, M_j)$</span>, <span class=\"mathjax-exps\">$s_x$</span> represents the stride over rows while <span class=\"mathjax-exps\">$s_y$</span> represents the stride over columns.</p>\n<p>In the linked video, we are sliding the kernel on both the x- and y- direction by 1 pixel at a time after each computation, giving a value of 1 for <span class=\"mathjax-exps\">$s_x$</span> and <span class=\"mathjax-exps\">$s_y$</span>. The input matrix in our video is of size 5, and our kernel is of size 3x3, giving us an output size of:</p>\n<p></p><div class=\"mathjax-exps\">$$(\\frac{5-3}{1}+1, \\frac{5-3}{1}+1)$$</div><p></p>\n<p>Expressed mathematically, the full procedure as implemented in <code>opencv</code>looks like this for a convolution:</p>\n<p><span class=\"mathjax-exps\">$H(x, y) = \\sum^{M_i-1}_{i=0}\\sum^{M_j-1}_{j=0} I(x+i-a_i, y+j-a_j)K(i,j)$</span></p>\n<p>We&apos;ll see the step-by-step given a kernel represented by matrix M:</p>\n<p></p><div class=\"mathjax-exps\">$$M = \\begin{bmatrix} 1 &amp; 2 &amp; 0 \\\\ -1 &amp; 3 &amp; 0 \\\\ 0 &amp; -1 &amp; 0  \\end{bmatrix}$$</div><p></p>\n<ol>\n<li>\n<p>Place the kernel anchor (in this case, <span class=\"mathjax-exps\">$3$</span>) on top of a determined pixel, with the rest of the kernel overlaying the corresponding local pixels in the image</p>\n<ul>\n<li>Typically the kernel anchor is the <em>central</em> of the kernel</li>\n<li>Typically the &quot;determined pixel&quot; at the first step is the most upperleft region of the image</li>\n</ul>\n</li>\n<li>\n<p>Multiply the kernel coefficients by the corresponding image pixel values and sum the result</p>\n</li>\n<li>\n<p>Replace 
the value at the location of the <em>anchor</em> in the input image with the result</p>\n</li>\n<li>\n<p>Repeat the process for all pixels by sliding the kernel across the entire image, as specified by the stride</p>\n</li>\n</ol>\n<h4 class=\"mume-header\" id=\"a-note-on-padding\">A Note on Padding</h4>\n\n<p>Keen readers may observe from executing <code>meanblur_02.py</code> that the original dimension of our image is preserved <em>after</em> the convolution. This may seem unexpected given what we know about the formula to derive the output dimension.<br>\nAs it turns out, to preserve the dimension between the input and output images, a common technique known as &quot;padding&quot; is applied. From the documentation itself,</p>\n<blockquote>\n<p>For example, if you want to smooth an image using a Gaussian 3 * 3 filter, then, when processing the left-most pixels in each row, you need pixels to the left of them, that is, outside of the image. You can let these pixels be the same as the left-most image pixels (&#x201C;replicated border&#x201D; extrapolation method), or assume that all the non-existing pixels are zeros (&#x201C;constant border&#x201D; extrapolation method), and so on.</p>\n</blockquote>\n<p>The various border interpolation techniques available in <code>opencv</code> are as below (image boundaries are denoted with &apos;|&apos;):</p>\n<ul>\n<li>BORDER_REPLICATE:\n<ul>\n<li><code>aaaaaa|abcdefgh|hhhhhhh</code></li>\n</ul>\n</li>\n<li>BORDER_REFLECT:\n<ul>\n<li><code>fedcba|abcdefgh|hgfedcb</code></li>\n</ul>\n</li>\n<li>BORDER_REFLECT_101:\n<ul>\n<li><code>gfedcb|abcdefgh|gfedcba</code></li>\n</ul>\n</li>\n<li>BORDER_WRAP:\n<ul>\n<li><code>cdefgh|abcdefgh|abcdefg</code></li>\n</ul>\n</li>\n<li>BORDER_CONSTANT:\n<ul>\n<li><code>iiiiii|abcdefgh|iiiiiii</code>  with some specified &apos;i&apos;</li>\n</ul>\n</li>\n</ul>\n<p>It is useful to remember that OpenCV only supports convolving an image where the dimension of its output matches that of the input, 
so in almost all cases we need a way to extrapolate an extra layer of pixels around the borders. To specify an extrapolation method, supply the filtering method with an extra argument:</p>\n<ul>\n<li><code>cv2.GaussianBlur(..., borderType=BORDER_CONSTANT)</code></li>\n</ul>\n<p>Given what we&apos;ve just learned, we can rewrite our formula to determine the output dimensions more generally and this time incorporating the padding technique:</p>\n<p></p><div class=\"mathjax-exps\">$$(\\frac{X_m - M_i + 2P_i}{s_x})+1, (\\frac{X_n-M_j + 2P_j}{s_y})+1$$</div><p></p>\n<h5 class=\"mume-header\" id=\"dive-deeper\">Dive Deeper</h5>\n\n<p>Before moving on to the next section, try and think through the following problem:</p>\n<p>In the case on a 333x333 input image, with a strides of 1 using a kernel of size 5*5, what is the amount of zero-padding you should add to the borders of your image such that the output image is also 333x333?</p>\n<ul>\n<li class=\"task-list-item\"><input type=\"checkbox\" class=\"task-list-item-checkbox\"> Done, I&apos;ve understood the convolution operation!</li>\n</ul>\n<h2 class=\"mume-header\" id=\"smoothing-and-blurring\">Smoothing and Blurring</h2>\n\n<p>To fully appreciate the idea of kernel convolutions, we&apos;ll see some real examples. We&apos;ll use the <code>cv2.filter2D</code> to convolve over our image using the following kernel:</p>\n<p></p><div class=\"mathjax-exps\">$$K = \\frac{1}{5\\cdot5} \\begin{bmatrix} 1 &amp; 1 &amp; 1 &amp; 1 &amp; 1 \\\\ 1 &amp; 1 &amp; 1 &amp; 1 &amp; 1 \\\\ 1 &amp; 1 &amp; 1 &amp; 1 &amp; 1 \\\\ 1 &amp; 1 &amp; 1 &amp; 1 &amp; 1  \\\\ 1 &amp; 1 &amp; 1 &amp; 1 &amp; 1  \\end{bmatrix}$$</div><p></p>\n<p>The kernel we specified above is equivalent to a <em>normalized box filter</em> of size 5. Having watched the video earlier, you may intuit that the outcome of such a convolution is that each pixel in the input image is replaced by the average of the 5x5 pixels around it. You are in fact correct. 
If you are skeptical and would rather see proof of it, we&apos;ll see proof of this in the <a href=\"#code-illustrations-mean-filtering\">Code Illustrations: Mean Filtering</a> section of this coursebook.</p>\n<p>Mathematically, by dividing our matrix by 25 (normalizing) we apply a control that stop our pixel values from being artificially increased since each pixel is now the weighted sum of its neighborhood.</p>\n<blockquote>\n<h4>A Note on Terminology</h4>\n<h5>Kernels or Filters?</h5>\n<p>When all we&apos;ve been talking about is kernels, why is it that we&apos;re using the &quot;filter&quot; terminology in <code>opencv</code> code instead? That depends on the context. In the case of a convolutional neural network, <em>kernel</em> and <em>filters</em> are used interchangably: they both refer to the same thing.<br>\nSome computer vision researchers have proposed to use a stricter definition, prefering to use the term &quot;kernel&quot; for a 2D array of weights, like our matrix above, and the term &quot;filter&quot; for the 3D structure of multiple kernels stacked together<sup class=\"footnote-ref\"><a href=\"#fn3\" id=\"fnref3\">[3]</a></sup>, a concept we&apos;ll explore further in the Convolutional Neural Network part of this course.</p>\n<h5>Correlations vs Convolutions</h5>\n<p>Imaging specialists may point to the fact that <code>opencv</code> does not mirror / flip the kernel around the anchor point and hence doesn&apos;t qualify as a convolution under strict definitions of digital imaging theory. For a pure implementation of a &quot;convolution&quot;, you should instead <code>scipy.ndimage.convolve(src, kernel)</code> instead or use <code>cv2.filter2D</code> in conjunction with a <code>flip</code> on the kernel<sup class=\"footnote-ref\"><a href=\"#fn4\" id=\"fnref4\">[4]</a></sup>. This is in large part owed to the difference in scientific parlance adopted by the various scientific communities, a phenomenon more common than you&apos;d expect. 
As an additional example, deep learning scientists usings convolutional neural network (CNN) generally refer to a non-flipped kernel when performing convolution.</p>\n</blockquote>\n<h4 class=\"mume-header\" id=\"code-illustrations-mean-filtering\">Code Illustrations: Mean Filtering</h4>\n\n<ol>\n<li><code>meanblur_01.py</code> demonstrates the construction of a 5x5 mean average filter using <code>np.ones((5,5))/25</code>. Because every coefficient is basically the same, this merely replaces the value of each pixel in our input image with the average of the values in its 5x5 neighborhood.</li>\n</ol>\n<pre data-role=\"codeBlock\" data-info=\"py\" class=\"language-python\">img <span class=\"token operator\">=</span> cv2<span class=\"token punctuation\">.</span>imread<span class=\"token punctuation\">(</span><span class=\"token string\">&quot;assets/canal.png&quot;</span><span class=\"token punctuation\">)</span>\nmean_blur <span class=\"token operator\">=</span> np<span class=\"token punctuation\">.</span>ones<span class=\"token punctuation\">(</span><span class=\"token punctuation\">(</span><span class=\"token number\">5</span><span class=\"token punctuation\">,</span> <span class=\"token number\">5</span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">,</span> dtype<span class=\"token operator\">=</span><span class=\"token string\">&quot;float32&quot;</span><span class=\"token punctuation\">)</span> <span class=\"token operator\">*</span> <span class=\"token punctuation\">(</span><span class=\"token number\">1.0</span> <span class=\"token operator\">/</span> <span class=\"token punctuation\">(</span><span class=\"token number\">5</span> <span class=\"token operator\">**</span> <span class=\"token number\">2</span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">)</span>\nsmoothed_col <span class=\"token operator\">=</span> cv2<span class=\"token punctuation\">.</span>filter2D<span class=\"token 
punctuation\">(</span>img<span class=\"token punctuation\">,</span> <span class=\"token operator\">-</span><span class=\"token number\">1</span><span class=\"token punctuation\">,</span> mean_blur<span class=\"token punctuation\">)</span>\n</pre><p>Alternatively, we can be explicit in our creation of the 5x5 kernel using <code>numpy</code>&apos;s array:</p>\n<pre data-role=\"codeBlock\" data-info=\"py\" class=\"language-python\">mean_blur <span class=\"token operator\">=</span> np<span class=\"token punctuation\">.</span>array<span class=\"token punctuation\">(</span>\n<span class=\"token punctuation\">[</span><span class=\"token punctuation\">[</span><span class=\"token number\">0.04</span><span class=\"token punctuation\">,</span> <span class=\"token number\">0.04</span><span class=\"token punctuation\">,</span> <span class=\"token number\">0.04</span><span class=\"token punctuation\">,</span> <span class=\"token number\">0.04</span><span class=\"token punctuation\">,</span> <span class=\"token number\">0.04</span><span class=\"token punctuation\">]</span><span class=\"token punctuation\">,</span>\n    <span class=\"token punctuation\">[</span><span class=\"token number\">0.04</span><span class=\"token punctuation\">,</span> <span class=\"token number\">0.04</span><span class=\"token punctuation\">,</span> <span class=\"token number\">0.04</span><span class=\"token punctuation\">,</span> <span class=\"token number\">0.04</span><span class=\"token punctuation\">,</span> <span class=\"token number\">0.04</span><span class=\"token punctuation\">]</span><span class=\"token punctuation\">,</span>\n    <span class=\"token punctuation\">[</span><span class=\"token number\">0.04</span><span class=\"token punctuation\">,</span> <span class=\"token number\">0.04</span><span class=\"token punctuation\">,</span> <span class=\"token number\">0.04</span><span class=\"token punctuation\">,</span> <span class=\"token number\">0.04</span><span class=\"token punctuation\">,</span> 
<span class=\"token number\">0.04</span><span class=\"token punctuation\">]</span><span class=\"token punctuation\">,</span>\n    <span class=\"token punctuation\">[</span><span class=\"token number\">0.04</span><span class=\"token punctuation\">,</span> <span class=\"token number\">0.04</span><span class=\"token punctuation\">,</span> <span class=\"token number\">0.04</span><span class=\"token punctuation\">,</span> <span class=\"token number\">0.04</span><span class=\"token punctuation\">,</span> <span class=\"token number\">0.04</span><span class=\"token punctuation\">]</span><span class=\"token punctuation\">,</span>\n    <span class=\"token punctuation\">[</span><span class=\"token number\">0.04</span><span class=\"token punctuation\">,</span> <span class=\"token number\">0.04</span><span class=\"token punctuation\">,</span> <span class=\"token number\">0.04</span><span class=\"token punctuation\">,</span> <span class=\"token number\">0.04</span><span class=\"token punctuation\">,</span> <span class=\"token number\">0.04</span><span class=\"token punctuation\">]</span><span class=\"token punctuation\">]</span><span class=\"token punctuation\">)</span>\n</pre><ol start=\"2\">\n<li>\n<p>To be fully convinced that the mean filtering operation is doing what we expect it to do, we can inspect the pixel values before and after the convolution, to verify that the math checks out by hand. 
We do this in <code>meanblur_02.py</code>.</p>\n<pre data-role=\"codeBlock\" data-info=\"py\" class=\"language-python\">img <span class=\"token operator\">=</span> cv2<span class=\"token punctuation\">.</span>imread<span class=\"token punctuation\">(</span><span class=\"token string\">&quot;assets/canal.png&quot;</span><span class=\"token punctuation\">)</span>\ngray <span class=\"token operator\">=</span> cv2<span class=\"token punctuation\">.</span>cvtColor<span class=\"token punctuation\">(</span>img<span class=\"token punctuation\">,</span> cv2<span class=\"token punctuation\">.</span>COLOR_BGR2GRAY<span class=\"token punctuation\">)</span>\n<span class=\"token keyword\">print</span><span class=\"token punctuation\">(</span><span class=\"token string-interpolation\"><span class=\"token string\">f&apos;Gray: </span><span class=\"token interpolation\"><span class=\"token punctuation\">{</span>gray<span class=\"token punctuation\">[</span><span class=\"token punctuation\">:</span><span class=\"token number\">5</span><span class=\"token punctuation\">,</span> <span class=\"token punctuation\">:</span><span class=\"token format-spec\">5]</span><span class=\"token punctuation\">}</span></span><span class=\"token string\">&apos;</span></span><span class=\"token punctuation\">)</span>\n<span class=\"token comment\"># [[ 31  27  21  17  21]</span>\n<span class=\"token comment\"># [ 77  85  86  87  90]</span>\n<span class=\"token comment\"># [205 205 215 227 222]</span>\n<span class=\"token comment\"># [224 230 222 243 249]</span>\n<span class=\"token comment\"># [138 210 206 218 242]]</span>\n<span class=\"token keyword\">for</span> i <span class=\"token keyword\">in</span> <span class=\"token builtin\">range</span><span class=\"token punctuation\">(</span><span class=\"token number\">3</span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">:</span>\n    newval <span class=\"token operator\">=</span> np<span class=\"token 
.</sp">
punctuation\">.</span><span class=\"token builtin\">round</span><span class=\"token punctuation\">(</span>np<span class=\"token punctuation\">.</span>mean<span class=\"token punctuation\">(</span>gray<span class=\"token punctuation\">[</span><span class=\"token punctuation\">:</span><span class=\"token number\">5</span><span class=\"token punctuation\">,</span> i<span class=\"token punctuation\">:</span>i<span class=\"token operator\">+</span><span class=\"token number\">5</span><span class=\"token punctuation\">]</span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">)</span>\n    <span class=\"token keyword\">print</span><span class=\"token punctuation\">(</span><span class=\"token string-interpolation\"><span class=\"token string\">f&apos;Mean of 5x5 window #</span><span class=\"token interpolation\"><span class=\"token punctuation\">{</span>i<span class=\"token operator\">+</span><span class=\"token number\">1</span><span class=\"token punctuation\">}</span></span><span class=\"token string\">: </span><span class=\"token interpolation\"><span class=\"token punctuation\">{</span><span class=\"token builtin\">int</span><span class=\"token punctuation\">(</span>newval<span class=\"token punctuation\">)</span><span class=\"token punctuation\">}</span></span><span class=\"token string\">&apos;</span></span><span class=\"token punctuation\">)</span>\n<span class=\"token comment\"># output:</span>\n<span class=\"token comment\"># Mean of 5x5 window #1: 152</span>\n<span class=\"token comment\"># Mean of 5x5 window #2: 158</span>\n<span class=\"token comment\"># Mean of 5x5 window #3: 160</span>\n</pre><p>The code above shows that the output of such a convolution operation beginning at the top-left region of the image would be 152. As we slide along the horizontal direction and re-compute the mean of the neighborhood, we get 158. 
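.</sp">
</p>
<p>The same sliding-window arithmetic can be reproduced with a few lines of standalone NumPy on a toy patch (an illustrative sketch with made-up values, independent of the repository&apos;s scripts):</p>

```python
import numpy as np

# A toy 5x7 grayscale patch; the values are made up purely for illustration
patch = np.arange(35, dtype=np.float64).reshape(5, 7)

# Slide a 5x5 mean kernel horizontally with stride 1: each output value
# is the plain average of the 25 pixels currently under the kernel
means = [float(patch[:, i:i + 5].mean()) for i in range(patch.shape[1] - 4)]
print(means)  # [16.0, 17.0, 18.0]
```

<p>Each step to the right shifts the window by one column, which is exactly the sliding behavior described above.</p><p>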
As we slide our kernel along the horizontal direction for a second time and re-compute the mean of the neighborhood we obtain the value of 160.</p>\n<p>If you prefer you can verify these values by hand, using the raw pixel values from <code>gray[:5, :5]</code> (5x5 top-left region of the image).</p>\n<pre data-role=\"codeBlock\" data-info=\"py\" class=\"language-python\">mean_blur <span class=\"token operator\">=</span> np<span class=\"token punctuation\">.</span>ones<span class=\"token punctuation\">(</span>KERNEL_SIZE<span class=\"token punctuation\">,</span> dtype<span class=\"token operator\">=</span><span class=\"token string\">&quot;float32&quot;</span><span class=\"token punctuation\">)</span> <span class=\"token operator\">*</span> <span class=\"token punctuation\">(</span><span class=\"token number\">1.0</span> <span class=\"token operator\">/</span> <span class=\"token punctuation\">(</span><span class=\"token number\">5</span> <span class=\"token operator\">**</span> <span class=\"token number\">2</span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">)</span>\nsmoothed_gray <span class=\"token operator\">=</span> cv2<span class=\"token punctuation\">.</span>filter2D<span class=\"token punctuation\">(</span>gray<span class=\"token punctuation\">,</span> <span class=\"token operator\">-</span><span class=\"token number\">1</span><span class=\"token punctuation\">,</span> mean_blur<span class=\"token punctuation\">)</span>\n<span class=\"token keyword\">print</span><span class=\"token punctuation\">(</span><span class=\"token string-interpolation\"><span class=\"token string\">f&apos;Smoothed: </span><span class=\"token interpolation\"><span class=\"token punctuation\">{</span>smoothed_gray<span class=\"token punctuation\">[</span><span class=\"token punctuation\">:</span><span class=\"token number\">5</span><span class=\"token punctuation\">,</span> <span class=\"token punctuation\">:</span><span class=\"token 
5]">
format-spec\">5]</span><span class=\"token punctuation\">}</span></span><span class=\"token string\">&apos;</span></span><span class=\"token punctuation\">)</span>\n<span class=\"token comment\"># output:</span>\n<span class=\"token comment\"># [[122 123 125 127 128]</span>\n<span class=\"token comment\"># [126 127 128 131 132]</span>\n<span class=\"token comment\"># [148 149 152 158 160]</span>\n<span class=\"token comment\"># [177 179 184 196 202]</span>\n<span class=\"token comment\"># [197 199 204 222 229]</span>\n</pre><p>Notice from the output of our mean filter that the first anchor (center of the neighborhood) has been transformed from 215 to 152, the one to its right from 227 to 158, and so on. The math does work out, and you can observe the blur effect directly by running <code>meanblur_02.py</code>.</p>\n</li>\n<li>\n<p>As it turns out, <code>opencv</code> provides a set of convenience functions for applying filters to our images. All three approaches below yield the same output, as can be verified from the output pixel values after executing <code>meanblur_03.py</code>:</p>\n<pre data-role=\"codeBlock\" data-info=\"py\" class=\"language-python\"><span class=\"token comment\"># approach 1</span>\nmean_blur <span class=\"token operator\">=</span> np<span class=\"token punctuation\">.</span>ones<span class=\"token punctuation\">(</span>KERNEL_SIZE<span class=\"token punctuation\">,</span> dtype<span class=\"token operator\">=</span><span class=\"token string\">&quot;float32&quot;</span><span class=\"token punctuation\">)</span> <span class=\"token operator\">*</span> <span class=\"token punctuation\">(</span><span class=\"token number\">1.0</span> <span class=\"token operator\">/</span> <span class=\"token punctuation\">(</span><span class=\"token number\">5</span> <span class=\"token operator\">**</span> <span class=\"token number\">2</span><span class=\"token punctuation\">)</span><span class=\"token 
)</sp">
punctuation\">)</span>\nsmoothed_gray <span class=\"token operator\">=</span> cv2<span class=\"token punctuation\">.</span>filter2D<span class=\"token punctuation\">(</span>gray<span class=\"token punctuation\">,</span> <span class=\"token operator\">-</span><span class=\"token number\">1</span><span class=\"token punctuation\">,</span> mean_blur<span class=\"token punctuation\">)</span> \n\n<span class=\"token comment\"># approach 2</span>\nsmoothed_gray <span class=\"token operator\">=</span> cv2<span class=\"token punctuation\">.</span>blur<span class=\"token punctuation\">(</span>gray<span class=\"token punctuation\">,</span> KERNEL_SIZE<span class=\"token punctuation\">)</span>\n\n<span class=\"token comment\"># approach 3</span>\nsmoothed_gray <span class=\"token operator\">=</span> cv2<span class=\"token punctuation\">.</span>boxFilter<span class=\"token punctuation\">(</span>gray<span class=\"token punctuation\">,</span> <span class=\"token operator\">-</span><span class=\"token number\">1</span><span class=\"token punctuation\">,</span> KERNEL_SIZE<span class=\"token punctuation\">)</span>\n</pre></li>\n</ol>\n<p>There are several types of kernels we can apply to blur an image. The averaging filter serves as a good introductory point because it is easy to reason about, but it is good to know that <code>opencv</code> provides a collection of convenience functions, each implementing a different blurring filter. See <a href=\"#handy-kernels-for-image-processing\">Handy kernels for image processing</a> for a list of smoothing kernels implemented in <code>opencv</code>.</p>\n<h2 class=\"mume-header\" id=\"role-in-convolutional-neural-networks\">Role in Convolutional Neural Networks</h2>\n\n<p>Earlier, it was said that kernels play an integral role in all modern convolutional neural network architectures. Using TensorFlow, one will rely on the <code>tf.nn.conv2d</code> function to perform a 2D convolution. 
The syntax looks like this:</p>\n<pre data-role=\"codeBlock\" data-info=\"py\" class=\"language-python\">tf<span class=\"token punctuation\">.</span>nn<span class=\"token punctuation\">.</span>conv2d<span class=\"token punctuation\">(</span>\n    <span class=\"token builtin\">input</span><span class=\"token punctuation\">,</span>\n    <span class=\"token builtin\">filter</span><span class=\"token punctuation\">,</span>\n    strides<span class=\"token punctuation\">,</span>\n    padding<span class=\"token punctuation\">,</span>\n    use_cudnn_on_gpu<span class=\"token operator\">=</span><span class=\"token boolean\">None</span><span class=\"token punctuation\">,</span>\n    data_format<span class=\"token operator\">=</span><span class=\"token boolean\">None</span><span class=\"token punctuation\">,</span>\n    name<span class=\"token operator\">=</span><span class=\"token boolean\">None</span>   \n<span class=\"token punctuation\">)</span>\n</pre><p>Where:</p>\n<ul>\n<li><code>input</code> is assumed to be a tensor of shape <code>(batch, height, width, channels)</code> where <code>batch</code> is the number of images in a minibatch</li>\n<li><code>filter</code> is a tensor of shape <code>(filter_height, filter_width, channels, out_channels)</code> that holds the learnable weights of the convolutional kernel</li>\n<li><code>strides</code> contains the filter strides and is a list of length 4 (one for each input dimension)</li>\n<li><code>padding</code> determines whether the input tensors are padded (with extra zeros) to guarantee that the output <em>from the convolutional layer</em> has the same shape as the input. <code>padding=&quot;SAME&quot;</code> adds padding to the input and <code>padding=&quot;VALID&quot;</code> results in no padding</li>\n</ul>\n<p>It is worth noting that the <code>input</code> and <code>filter</code> parameters follow what we&apos;ve implemented using <code>opencv</code> thus far. 
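</p>
<p>The effect of <code>padding</code> on the output&apos;s spatial size can be sketched with a small helper function (illustrative only; it mirrors the formulas given in TensorFlow&apos;s documentation and is not part of this repository):</p>

```python
import math

def conv_output_size(in_size, filter_size, stride, padding):
    """Output size along one spatial dimension of a 2D convolution,
    following TensorFlow's documented SAME/VALID rules."""
    if padding == "SAME":
        # enough zero-padding is added that only the stride shrinks the output
        return math.ceil(in_size / stride)
    if padding == "VALID":
        # no padding: the kernel must fit entirely inside the input
        return math.ceil((in_size - filter_size + 1) / stride)
    raise ValueError(f"unknown padding: {padding}")

print(conv_output_size(32, 5, 1, "VALID"))  # 28
print(conv_output_size(32, 5, 1, "SAME"))   # 32
```

<p>With a 32x32 input and a 5x5 filter, <code>VALID</code> yields a 28x28 output while <code>SAME</code> preserves the 32x32 shape.</p><p>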
When we&apos;re applying a filter like the mean blur example earlier, we slide our kernel with a <code>stride</code> of 1. In TensorFlow code, we would have set <code>strides=[1,1,1,1]</code> such that the kernel would slide by 1 unit across all 4 dimensions (x, y, channel, and image index).</p>\n<p>Example of a Convolutional Neural Network architecture<sup class=\"footnote-ref\"><a href=\"#fn5\" id=\"fnref5\">[5]</a></sup>:<br>\n<img src=\"assets/c6archit.png\" alt></p>\n<p>Notice from the image that the output of the first convolution layer is smaller (28x28) than its input (32x32) when we perform the operation without padding. <code>C1</code> and <code>C3</code> are examples of this in the above illustration.</p>\n<p>In <code>S1</code> and <code>S2</code>, we&apos;re applying a max-pooling filter to down-sample our image representation, allowing our network to learn parameters from the higher-order representations in each region of the image. An example operation is depicted below:</p>\n<p><img src=\"assets/c6pooling.png\" alt></p>\n<h2 class=\"mume-header\" id=\"handy-kernels-for-image-processing\">Handy Kernels for Image Processing</h2>\n\n<ul>\n<li>Averaging Filter: <code>cv2.blur(img, KERNEL_SIZE)</code>\n<ul>\n<li>As seen in <code>meanblur_03.py</code>, replaces each pixel with the <strong>mean</strong> of its neighboring pixels</li>\n</ul>\n</li>\n<li>Median Filter: <code>cv2.medianBlur(img, KERNEL_SIZE)</code>\n<ul>\n<li>Replaces each pixel with the <strong>median</strong> of its neighboring pixels</li>\n</ul>\n</li>\n<li>Gaussian Filter: <code>cv2.GaussianBlur(img, KERNEL_SIZE, 0)</code></li>\n<li>Bilateral Filter: <code>cv2.bilateralFilter(img, d, sigmaColor, sigmaSpace)</code>\n<ul>\n<li>An edge-preserving smoothing filter that aims to keep edges sharp</li>\n</ul>\n</li>\n</ul>\n<h4 class=\"mume-header\" id=\"gaussian-filtering\">Gaussian Filtering</h4>\n\n<p>The Gaussian filter deserves its own section given its prevalence in image 
processing. It is achieved by convolving each point in the input array (read: each pixel in our image) with a <em>Gaussian kernel</em> and summing the contributions to produce the output array.</p>\n<p>If you remember your lessons from statistics, you may recall that a 1D Gaussian distribution looks like this:<br>\n<img src=\"assets/normaldist.png\" style=\"width: 50%; margin-left:20%;\"></p>\n<p>For completeness&apos; sake, the code to graph the distribution above is in <code>utils/gaussiancurve.r</code>.</p>\n<p>For a 1-dimensional image, the pixel located in the middle would be assigned the largest weight, with the weights of its neighbours decreasing as the spatial distance between them and the center pixel increases.</p>\n<p>For the mathematically inclined, the graphed distribution above is generated from the Gaussian function<sup class=\"footnote-ref\"><a href=\"#fn6\" id=\"fnref6\">[6]</a></sup>:</p>\n<p></p><div class=\"mathjax-exps\">$$g(x) = e^{\\frac{-x^2}{2\\sigma^2}}$$</div><p></p>\n<p>Where <span class=\"mathjax-exps\">$x$</span> is the spatial distance between the center pixel and the corresponding neighbor unit.</p>\n<p>For a 1D kernel of size 7, each pixel would therefore be weighted accordingly:</p>\n<p></p><div class=\"mathjax-exps\">$$g(x) = \\begin{bmatrix}.011 &amp; .13 &amp; .6 &amp; 1 &amp; .6 &amp; .13 &amp; .011\\end{bmatrix}$$</div><p></p>\n<p>This should not be hard to see: referring back to the graphed distribution, at the center pixel (position <span class=\"mathjax-exps\">$x=0$</span>) <span class=\"mathjax-exps\">$g(x)$</span> evaluates to <span class=\"mathjax-exps\">$1$</span>.</p>\n<pre data-role=\"codeBlock\" data-info=\"py\" class=\"language-python\"><span class=\"token keyword\">import</span> numpy <span class=\"token keyword\">as</span> np\nweights <span class=\"token operator\">=</span> <span class=\"token punctuation\">[</span><span class=\"token punctuation\">]</span>\nsd <span class=\"token operator\">=</span> <span 
class=\"token number\">1</span>\n<span class=\"token keyword\">for</span> i <span class=\"token keyword\">in</span> <span class=\"token builtin\">range</span><span class=\"token punctuation\">(</span><span class=\"token number\">4</span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">:</span>\n    weights <span class=\"token operator\">+=</span> <span class=\"token punctuation\">[</span>np<span class=\"token punctuation\">.</span><span class=\"token builtin\">round</span><span class=\"token punctuation\">(</span>np<span class=\"token punctuation\">.</span>exp<span class=\"token punctuation\">(</span><span class=\"token punctuation\">(</span><span class=\"token operator\">-</span>i<span class=\"token operator\">**</span><span class=\"token number\">2</span><span class=\"token punctuation\">)</span><span class=\"token operator\">/</span><span class=\"token punctuation\">(</span><span class=\"token number\">2</span><span class=\"token operator\">*</span>sd<span class=\"token operator\">**</span><span class=\"token number\">2</span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">,</span><span class=\"token number\">3</span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">]</span>\n<span class=\"token keyword\">print</span><span class=\"token punctuation\">(</span>weights<span class=\"token punctuation\">)</span>\n<span class=\"token comment\"># output:</span>\n<span class=\"token comment\"># [1.0, 0.607, 0.135, 0.011]</span>\n</pre><p>For a 2D kernel, the formula would take the form of:<br>\n</p><div class=\"mathjax-exps\">$$g(x,y) = e^{\\frac{-(x^2+y^2)}{2\\sigma^2}}$$</div><p></p>\n<p>When we compare the output of a mean filter to a gaussian filter, as in the example script in <code>gaussianblur_01.py</code>, we can then observe the difference in output visually:</p>\n<p><img src=\"assets/meanvsgaussian.png\" alt></p>\n<p>This should also come 
as little surprise: the mean filter simply replaces each pixel with the average of its neighboring pixels, essentially assigning an equal (unnormalized) coefficient of 1 to every cell of the 5x5 grid.</p>\n<p>Gaussian filters, on the other hand, <strong>weigh pixels using a Gaussian distribution</strong> (think: a bell curve in 2D space) around the center pixel, such that farther pixels are given a lower coefficient than nearer ones.</p>\n<h4 class=\"mume-header\" id=\"sharpening-kernels\">Sharpening Kernels</h4>\n\n<p>The opposite of blurring is sharpening. There are again several approaches, and we&apos;ll start by looking at two of them.</p>\n<p>The first approach relies on the familiar <code>cv2.filter2D()</code> function to apply the following kernel and is implemented in <code>sharpening_01.py</code>:<br>\n</p><div class=\"mathjax-exps\">$$K = \\begin{bmatrix} -1 &amp; -1 &amp; -1 \\\\ -1 &amp; 9 &amp; -1 \\\\ -1 &amp; -1 &amp; -1 \\end{bmatrix}$$</div><p></p>\n<p>The outcome:<br>\n<img src=\"assets/sharpen.png\" alt></p>\n<h5 class=\"mume-header\" id=\"approximate-gaussian-kernel-for-sharpening\">Approximate Gaussian Kernel for Sharpening</h5>\n\n<p>We can apply the same principles behind a Gaussian kernel for sharpening operations (as opposed to blurring). 
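</p>
<p>A quick sanity check on the 3x3 sharpening kernel shown above: its entries sum to 1, so perfectly flat regions pass through unchanged and only local contrast gets amplified (a standalone NumPy sketch; no OpenCV required):</p>

```python
import numpy as np

# The 3x3 sharpening kernel K shown above
K = np.array([[-1, -1, -1],
              [-1,  9, -1],
              [-1, -1, -1]])

# On a perfectly flat neighborhood the convolution response is
# value * K.sum() = value * 1, so flat areas are left untouched
flat = np.full((3, 3), 100)
print(int((flat * K).sum()))  # 100
print(int(K.sum()))           # 1
```

<p>Any deviation of the center pixel from its neighbors is amplified by the large center weight, which is what produces the sharpening effect.</p><p>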
The full script is in <code>sharpening_02.py</code> but the essential parts are as follows:</p>\n<pre data-role=\"codeBlock\" data-info=\"py\" class=\"language-python\">approx_gaussian <span class=\"token operator\">=</span> <span class=\"token punctuation\">(</span>\n    np<span class=\"token punctuation\">.</span>array<span class=\"token punctuation\">(</span>\n        <span class=\"token punctuation\">[</span>\n            <span class=\"token punctuation\">[</span><span class=\"token operator\">-</span><span class=\"token number\">1</span><span class=\"token punctuation\">,</span> <span class=\"token operator\">-</span><span class=\"token number\">1</span><span class=\"token punctuation\">,</span> <span class=\"token operator\">-</span><span class=\"token number\">1</span><span class=\"token punctuation\">,</span> <span class=\"token operator\">-</span><span class=\"token number\">1</span><span class=\"token punctuation\">,</span> <span class=\"token operator\">-</span><span class=\"token number\">1</span><span class=\"token punctuation\">]</span><span class=\"token punctuation\">,</span>\n            <span class=\"token punctuation\">[</span><span class=\"token operator\">-</span><span class=\"token number\">1</span><span class=\"token punctuation\">,</span> <span class=\"token number\">2</span><span class=\"token punctuation\">,</span> <span class=\"token number\">2</span><span class=\"token punctuation\">,</span> <span class=\"token number\">2</span><span class=\"token punctuation\">,</span> <span class=\"token operator\">-</span><span class=\"token number\">1</span><span class=\"token punctuation\">]</span><span class=\"token punctuation\">,</span>\n            <span class=\"token punctuation\">[</span><span class=\"token operator\">-</span><span class=\"token number\">1</span><span class=\"token punctuation\">,</span> <span class=\"token number\">2</span><span class=\"token punctuation\">,</span> <span class=\"token number\">8</span><span class=\"token 
punctuation\">,</span> <span class=\"token number\">2</span><span class=\"token punctuation\">,</span> <span class=\"token operator\">-</span><span class=\"token number\">1</span><span class=\"token punctuation\">]</span><span class=\"token punctuation\">,</span>\n            <span class=\"token punctuation\">[</span><span class=\"token operator\">-</span><span class=\"token number\">1</span><span class=\"token punctuation\">,</span> <span class=\"token number\">2</span><span class=\"token punctuation\">,</span> <span class=\"token number\">2</span><span class=\"token punctuation\">,</span> <span class=\"token number\">2</span><span class=\"token punctuation\">,</span> <span class=\"token operator\">-</span><span class=\"token number\">1</span><span class=\"token punctuation\">]</span><span class=\"token punctuation\">,</span>\n            <span class=\"token punctuation\">[</span><span class=\"token operator\">-</span><span class=\"token number\">1</span><span class=\"token punctuation\">,</span> <span class=\"token operator\">-</span><span class=\"token number\">1</span><span class=\"token punctuation\">,</span> <span class=\"token operator\">-</span><span class=\"token number\">1</span><span class=\"token punctuation\">,</span> <span class=\"token operator\">-</span><span class=\"token number\">1</span><span class=\"token punctuation\">,</span> <span class=\"token operator\">-</span><span class=\"token number\">1</span><span class=\"token punctuation\">]</span><span class=\"token punctuation\">,</span>\n        <span class=\"token punctuation\">]</span>\n    <span class=\"token punctuation\">)</span><span class=\"token operator\">/</span> <span class=\"token number\">8.0</span>\n<span class=\"token punctuation\">)</span>\nsharpen_col <span class=\"token operator\">=</span> cv2<span class=\"token punctuation\">.</span>filter2D<span class=\"token punctuation\">(</span>img<span class=\"token punctuation\">,</span> <span class=\"token operator\">-</span><span 
class=\"token number\">1</span><span class=\"token punctuation\">,</span> approx_gaussian<span class=\"token punctuation\">)</span>\n</pre><p>Notice how this method uses an approximate Gaussian kernel and that the result is an overall more natural smoothing:<br>\n<img src=\"assets/gaussiansharpen.png\" alt></p>\n<h5 class=\"mume-header\" id=\"unsharp-masking\">Unsharp Masking</h5>\n\n<p>The second approach is known as &quot;unsharp masking&quot;, derived from that fact that the technique uses a blurred, or &quot;unsharp&quot;, negative image to create a mask of the original image<sup class=\"footnote-ref\"><a href=\"#fn7\" id=\"fnref7\">[7]</a></sup>. This technique is one of the oldest tool in photographic processing (tracing back to 1930s) and popular tools such as Adobe Photoshop and GIMP have direct implementations of it named, appropriately, Unsharp Mask.</p>\n<p>Lifted straight from the Wikipedia article itself, a &quot;typical blending formula for unsharp masking is <strong>sharpened = original + (original - blurred) * amount</strong>&quot;. <strong>Amount</strong> represents how much contrast is added to the edges.</p>\n<p>To rewrite the formula, we get:<br>\n</p><div class=\"mathjax-exps\">$$\\begin{aligned} Sharpened &amp; = O + (O-B) \\cdot a \\\\ &amp; = O + Oa - Ba \\\\ &amp; = O (1+a) + B(-a)\\end{aligned}$$</div><p></p>\n<p>Where <span class=\"mathjax-exps\">$a$</span> is the amount, <span class=\"mathjax-exps\">$B$</span> is the blurred image (mask) and <span class=\"mathjax-exps\">$O$</span> is the original image. The final form is convenient because we can plug it into <code>cv2.addWeighted</code> and get an output. From OpenCV&apos;s documentation, the function <code>addWeighted</code> calculates the weighted sum of two arrays as follows:<br>\n</p><div class=\"mathjax-exps\">$$dst(I) = saturate(src1(I) * alpha + src2(I) * beta + gamma)$$</div><p></p>\n<p>When you perform the arithmetic above, you will find that the values (eg. 
<code>src1(I) * alpha</code> with alpha &gt; 1.5 can produce values greater than 255) may fall outside the range of 0 to 255. Saturation clips the value, which is equivalent to the following:</p>\n<p></p><div class=\"mathjax-exps\">$$Saturate(x) = min(max(round(x), 0), 255)$$</div><p></p>\n<p>The following code demonstrates the unsharp masking technique:</p>\n<pre data-role=\"codeBlock\" data-info=\"py\" class=\"language-python\">img <span class=\"token operator\">=</span> cv2<span class=\"token punctuation\">.</span>imread<span class=\"token punctuation\">(</span><span class=\"token string\">&quot;assets/sarpi.png&quot;</span><span class=\"token punctuation\">)</span>\n\namt <span class=\"token operator\">=</span> <span class=\"token number\">1.5</span>\nblurred <span class=\"token operator\">=</span> cv2<span class=\"token punctuation\">.</span>GaussianBlur<span class=\"token punctuation\">(</span>img<span class=\"token punctuation\">,</span> <span class=\"token punctuation\">(</span><span class=\"token number\">5</span><span class=\"token punctuation\">,</span><span class=\"token number\">5</span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">,</span> <span class=\"token number\">10</span><span class=\"token punctuation\">)</span>\nunsharp <span class=\"token operator\">=</span> cv2<span class=\"token punctuation\">.</span>addWeighted<span class=\"token punctuation\">(</span>img<span class=\"token punctuation\">,</span> <span class=\"token number\">1</span><span class=\"token operator\">+</span>amt<span class=\"token punctuation\">,</span> blurred<span class=\"token punctuation\">,</span> <span class=\"token operator\">-</span>amt<span class=\"token punctuation\">,</span> <span class=\"token number\">0</span><span class=\"token punctuation\">)</span>\nunsharp_manual <span class=\"token operator\">=</span> np<span class=\"token punctuation\">.</span>clip<span class=\"token punctuation\">(</span>img <span class=\"token 
*</s">
operator\">*</span> <span class=\"token punctuation\">(</span><span class=\"token number\">1</span><span class=\"token operator\">+</span>amt<span class=\"token punctuation\">)</span> <span class=\"token operator\">+</span> blurred <span class=\"token operator\">*</span> <span class=\"token punctuation\">(</span><span class=\"token operator\">-</span>amt<span class=\"token punctuation\">)</span><span class=\"token punctuation\">,</span> <span class=\"token number\">0</span><span class=\"token punctuation\">,</span> <span class=\"token number\">255</span><span class=\"token punctuation\">)</span>\ncv2<span class=\"token punctuation\">.</span>imshow<span class=\"token punctuation\">(</span><span class=\"token string\">&quot;Unsharp Masking&quot;</span><span class=\"token punctuation\">,</span> unsharp<span class=\"token punctuation\">)</span>\n</pre><p><img src=\"assets/unsharpsarpi.png\" alt><br>\nYou can find the sample code for this in <code>unsharpmask_01.py</code> (using <code>addWeighted</code>) and in <code>unsharpmask_02.py</code> (manual calculation) respectively.</p>\n<h2 class=\"mume-header\" id=\"summary-and-key-points\">Summary and Key Points</h2>\n\n<p>Why go to such lengths to understand the mathematical ideas behind image filtering operations?</p>\n<blockquote>\n<p>Filtering is perhaps the most fundamental operation of image processing and computer vision. 
In the broadest sense of the term &quot;filtering&quot;, the value of the filtered image at a given location is a function of the values of the input image in a small neighborhood of the same location.<sup class=\"footnote-ref\"><a href=\"#fn8\" id=\"fnref8\">[8]</a></sup></p>\n</blockquote>\n<p>It is fundamental to a host of common image processing techniques, from enhancement (sharpening, denoising, contrast adjustment) to edge detection, texture detection, and, in the case of deep learning, feature detection.</p>\n<p>To help with your recall, I made a simple illustration below:</p>\n<p><img src=\"assets/gaussiankernel.png\" alt></p>\n<p>Whenever you&apos;re ready, move on to <code>edgedetect.md</code> to learn the essentials of edge detection using kernel operations.</p>\n<h2 class=\"mume-header\" id=\"references\">References</h2>\n\n<hr class=\"footnotes-sep\">\n<section class=\"footnotes\">\n<ol class=\"footnotes-list\">\n<li id=\"fn1\" class=\"footnote-item\"><p>Making your own linear filters, <a href=\"https://docs.opencv.org/2.4/doc/tutorials/imgproc/imgtrans/filter_2d/filter_2d.html\">OpenCV Documentation</a> <a href=\"#fnref1\" class=\"footnote-backref\">&#x21A9;&#xFE0E;</a></p>\n</li>\n<li id=\"fn2\" class=\"footnote-item\"><p>Bradski, Kaehler, Learning OpenCV <a href=\"#fnref2\" class=\"footnote-backref\">&#x21A9;&#xFE0E;</a></p>\n</li>\n<li id=\"fn3\" class=\"footnote-item\"><p>Stack Exchange, <a href=\"https://stats.stackexchange.com/a/366940\">https://stats.stackexchange.com/a/366940</a> <a href=\"#fnref3\" class=\"footnote-backref\">&#x21A9;&#xFE0E;</a></p>\n</li>\n<li id=\"fn4\" class=\"footnote-item\"><p><a href=\"http://docs.opencv.org/modules/imgproc/doc/filtering.html#filter2d\">OpenCV Documentation</a> <a href=\"#fnref4\" class=\"footnote-backref\">&#x21A9;&#xFE0E;</a></p>\n</li>\n<li id=\"fn5\" class=\"footnote-item\"><p>R. Zadeh and B. Ramsundar, TensorFlow for Deep Learning, O&apos;Reilly Media <a href=\"#fnref5\" 
class=\"footnote-backref\">&#x21A9;&#xFE0E;</a></p>\n</li>\n<li id=\"fn6\" class=\"footnote-item\"><p>Wikipedia, Gaussian function, <a href=\"https://en.wikipedia.org/wiki/Gaussian_function\">https://en.wikipedia.org/wiki/Gaussian_function</a> <a href=\"#fnref6\" class=\"footnote-backref\">&#x21A9;&#xFE0E;</a></p>\n</li>\n<li id=\"fn7\" class=\"footnote-item\"><p>W.Fulton, A few scanning tips, Sharpening - Unsharp Mask <a href=\"#fnref7\" class=\"footnote-backref\">&#x21A9;&#xFE0E;</a></p>\n</li>\n<li id=\"fn8\" class=\"footnote-item\"><p>C. Tomasi and R. Manduchi, &quot;Bilateral Filtering for Gray and Color Images&quot;, Proceedings of the 1998 IEEE International Conference on Computer Vision, Bombay, India. <a href=\"#fnref8\" class=\"footnote-backref\">&#x21A9;&#xFE0E;</a></p>\n</li>\n</ol>\n</section>\n</div>\n      </div>\n      <div class=\"md-sidebar-toc\"><ul>\n<li><a href=\"#kernels\">Kernels</a>\n<ul>\n<li><a href=\"#definition\">Definition</a>\n<ul>\n<li><a href=\"#mathematical-definitions\">Mathematical Definitions</a>\n<ul>\n<li><a href=\"#a-note-on-padding\">A Note on Padding</a>\n<ul>\n<li><a href=\"#dive-deeper\">Dive Deeper</a></li>\n</ul>\n</li>\n</ul>\n</li>\n</ul>\n</li>\n<li><a href=\"#smoothing-and-blurring\">Smoothing and Blurring</a>\n<ul>\n<li><a href=\"#code-illustrations-mean-filtering\">Code Illustrations: Mean Filtering</a></li>\n</ul>\n</li>\n<li><a href=\"#role-in-convolutional-neural-networks\">Role in Convolutional Neural Networks</a></li>\n<li><a href=\"#handy-kernels-for-image-processing\">Handy Kernels for Image Processing</a>\n<ul>\n<li><a href=\"#gaussian-filtering\">Gaussian Filtering</a></li>\n<li><a href=\"#sharpening-kernels\">Sharpening Kernels</a>\n<ul>\n<li><a href=\"#approximate-gaussian-kernel-for-sharpening\">Approximate Gaussian Kernel for Sharpening</a></li>\n<li><a href=\"#unsharp-masking\">Unsharp Masking</a></li>\n</ul>\n</li>\n</ul>\n</li>\n<li><a href=\"#summary-and-key-points\">Summary and Key 
Points</a></li>\n<li><a href=\"#references\">References</a></li>\n</ul>\n</li>\n</ul>\n</div>\n      <a id=\"sidebar-toc-btn\">&#x2261;</a>\n    \n    \n    \n    \n    \n    \n    \n    \n<script>\n\nvar sidebarTOCBtn = document.getElementById('sidebar-toc-btn')\nsidebarTOCBtn.addEventListener('click', function(event) {\n  event.stopPropagation()\n  if (document.body.hasAttribute('html-show-sidebar-toc')) {\n    document.body.removeAttribute('html-show-sidebar-toc')\n  } else {\n    document.body.setAttribute('html-show-sidebar-toc', true)\n  }\n})\n</script>\n      \n  \n    </body></html>"
  },
  {
    "path": "edgedetect/kernel.md",
    "content": "# Kernels\n## Definition\nWhen performing an arithmetic computation on a given image, one approach is to apply said computation in a neighborhood-by-neighborhood manner. This approach is very braodly termed as a **convolution**. In other words, convolution is an operation between every part of an image (\"pixel neighborhood\") and an operator (\"kernel\")[^1][^2].\n\nAs the computation slides over each pixel neighborhood, we perform some arithmetic using the kernel, with the kernel typically being represented as a matrix or a fixed size array. \n\nThis kernel describes how the pixels in that neighborhood are combined or transformed to yield a corresponding output.\n\n- [ ] [Watch Kernel Convolution Explained Visually](https://www.youtube.com/watch?v=WMmHcrX4Obg)\n\n    <iframe width=\"560\" height=\"315\" src=\"https://www.youtube.com/embed/WMmHcrX4Obg\" frameborder=\"0\" allow=\"accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture\" allowfullscreen></iframe>\n\n### Mathematical Definitions\nYou will notice from the video that the output image now has a **shape that is smaller** than the original input. Mathematically, the shape of this output would be:\n\n$$(\\frac{X_m-M_i}{s_x})+1, (\\frac{X_n-M_j}{s_y})+1$$\n\nWhere the input matrix has a size of $(X_m, X_n)$, the kernel $M$ is of size $(M_i, M_j)$, $s_x$ represents the stride over rows while $s_y$ represents the stride over columns. \n\nIn the linked video, we are sliding the kernel on both the x- and y- direction by 1 pixel at a time after each computation, giving a value of 1 for $s_x$ and $s_y$. 
The input matrix in our video is of size 5x5, and our kernel is of size 3x3, giving us an output size of:\n\n$$(\\frac{5-3}{1}+1, \\frac{5-3}{1}+1)$$\n\nExpressed mathematically, the full procedure as implemented in `opencv` looks like this for a convolution:\n\n$H(x, y) = \\sum^{M_i-1}_{i=0}\\sum^{M_j-1}_{j=0} I(x+i-a_i, y+j-a_j)K(i,j)$\n\nWe'll walk through the steps given a kernel represented by matrix M:\n\n$$M = \\begin{bmatrix} 1 & 2 & 0 \\\\ -1 & 3 & 0 \\\\ 0 & -1 & 0  \\end{bmatrix}$$\n\n1. Place the kernel anchor (in this case, $3$) on top of a determined pixel, with the rest of the kernel overlaying the corresponding local pixels in the image\n    - Typically the kernel anchor is the _center_ of the kernel\n    - Typically the \"determined pixel\" at the first step is the upper-left corner of the image\n\n2. Multiply the kernel coefficients by the corresponding image pixel values and sum the result  \n\n3. Replace the value at the location of the _anchor_ in the input image with the result\n\n4. Repeat the process for all pixels by sliding the kernel across the entire image, as specified by the stride\n\n#### A Note on Padding\nKeen readers may observe from executing `meanblur_02.py` that the original dimension of our image is preserved _after_ the convolution. This may seem unexpected given what we know about the formula to derive the output dimension. \nAs it turns out, to preserve the dimension between the input and output images, a common technique known as \"padding\" is applied. From the documentation itself, \n> For example, if you want to smooth an image using a Gaussian 3 * 3 filter, then, when processing the left-most pixels in each row, you need pixels to the left of them, that is, outside of the image. You can let these pixels be the same as the left-most image pixels (“replicated border” extrapolation method), or assume that all the non-existing pixels are zeros (“constant border” extrapolation method), and so on. 
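To make the dimension arithmetic concrete, here is a minimal sketch in plain Python (the `output_size` helper is our own naming, not an `opencv` API) of the output-size formula, with a padding term `p` anticipating the generalized formula derived later in this section:

```py
def output_size(x, m, p=0, s=1):
    # output length along one axis: input length x, kernel length m,
    # padding p on each side, stride s
    return (x - m + 2 * p) // s + 1

# the 5x5 input and 3x3 kernel from the video, stride 1:
print(output_size(5, 3))       # 3
# padding by 1 pixel on each side preserves the input size:
print(output_size(5, 3, p=1))  # 5
```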
\n\nThe various border interpolation techniques available in `opencv` are as below (image boundaries are denoted with '|'):\n\n - BORDER_REPLICATE:\n    - `aaaaaa|abcdefgh|hhhhhhh`\n - BORDER_REFLECT:\n    - `fedcba|abcdefgh|hgfedcb`\n - BORDER_REFLECT_101:\n    - `gfedcb|abcdefgh|gfedcba`\n - BORDER_WRAP:\n    - `cdefgh|abcdefgh|abcdefg`\n - BORDER_CONSTANT:\n    - `iiiiii|abcdefgh|iiiiiii`  with some specified 'i'\n \nIt is useful to remember that OpenCV only supports convolving an image where the dimension of its output matches that of the input, so in almost all cases we need a way to extrapolate an extra layer of pixels around the borders. To specify an extrapolation method, supply the filtering method with an extra argument:\n - `cv2.GaussianBlur(..., borderType=cv2.BORDER_CONSTANT)`\n\n Given what we've just learned, we can rewrite our formula to determine the output dimensions more generally, this time incorporating the padding technique:\n\n $$(\\frac{X_m - M_i + 2P_i}{s_x})+1, (\\frac{X_n-M_j + 2P_j}{s_y})+1$$\n\n##### Dive Deeper\n Before moving on to the next section, try to think through the following problem:\n\n In the case of a 333x333 input image, with a stride of 1 and a kernel of size 5x5, what is the amount of zero-padding you should add to the borders of your image such that the output image is also 333x333?\n\n- [ ] Done, I've understood the convolution operation!\n\n## Smoothing and Blurring\nTo fully appreciate the idea of kernel convolutions, we'll see some real examples. We'll use `cv2.filter2D` to convolve over our image using the following kernel:\n\n$$K = \\frac{1}{5\\cdot5} \\begin{bmatrix} 1 & 1 & 1 & 1 & 1 \\\\ 1 & 1 & 1 & 1 & 1 \\\\ 1 & 1 & 1 & 1 & 1 \\\\ 1 & 1 & 1 & 1 & 1  \\\\ 1 & 1 & 1 & 1 & 1  \\end{bmatrix}$$\n\nThe kernel we specified above is equivalent to a _normalized box filter_ of size 5. 
Having watched the video earlier, you may intuit that the outcome of such a convolution is that each pixel in the input image is replaced by the average of the 5x5 pixels around it. You are in fact correct. If you are skeptical and would rather see proof, we'll work through it in the [Code Illustrations: Mean Filtering](#code-illustrations-mean-filtering) section of this coursebook.\n\nMathematically, by dividing our matrix by 25 (normalizing) we apply a control that stops our pixel values from being artificially inflated, since each pixel is now the weighted sum of its neighborhood.\n\n> #### A Note on Terminology\n> ##### Kernels or Filters?\n> When all we've been talking about is kernels, why is it that we're using the \"filter\" terminology in `opencv` code instead? That depends on the context. In the case of a convolutional neural network, _kernel_ and _filter_ are used interchangeably: they both refer to the same thing.\n> Some computer vision researchers have proposed a stricter definition, preferring to use the term \"kernel\" for a 2D array of weights, like our matrix above, and the term \"filter\" for the 3D structure of multiple kernels stacked together[^3], a concept we'll explore further in the Convolutional Neural Network part of this course.\n> \n> ##### Correlations vs Convolutions\n> Imaging specialists may point to the fact that `opencv` does not mirror / flip the kernel around the anchor point, and hence the operation doesn't qualify as a convolution under strict definitions of digital imaging theory. For a pure implementation of a \"convolution\", you should use `scipy.ndimage.convolve(src, kernel)` instead, or use `cv2.filter2D` in conjunction with a `flip` on the kernel[^4]. This is in large part owed to differences in scientific parlance adopted by the various scientific communities, a phenomenon more common than you'd expect. 
As an additional example, deep learning scientists using convolutional neural networks (CNN) generally refer to a non-flipped kernel when performing convolution.\n\n#### Code Illustrations: Mean Filtering\n1. `meanblur_01.py` demonstrates the construction of a 5x5 mean (averaging) filter using `np.ones((5,5))/25`. Because every coefficient is identical, this merely replaces the value of each pixel in our input image with the average of the values in its 5x5 neighborhood. \n\n```py\nimg = cv2.imread(\"assets/canal.png\")\nmean_blur = np.ones((5, 5), dtype=\"float32\") * (1.0 / (5 ** 2))\nsmoothed_col = cv2.filter2D(img, -1, mean_blur)\n```\n\nAlternatively, we can be explicit in our creation of the 5x5 kernel using `numpy`'s array:\n```py\nmean_blur = np.array(\n[[0.04, 0.04, 0.04, 0.04, 0.04],\n    [0.04, 0.04, 0.04, 0.04, 0.04],\n    [0.04, 0.04, 0.04, 0.04, 0.04],\n    [0.04, 0.04, 0.04, 0.04, 0.04],\n    [0.04, 0.04, 0.04, 0.04, 0.04]])\n```\n\n2. To be fully convinced that the mean filtering operation is doing what we expect it to do, we can inspect the pixel values before and after the convolution, to verify that the math checks out by hand. We do this in `meanblur_02.py`.\n\n    ```py\n    img = cv2.imread(\"assets/canal.png\")\n    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)\n    print(f'Gray: {gray[:5, :5]}')\n    # [[ 31  27  21  17  21]\n    # [ 77  85  86  87  90]\n    # [205 205 215 227 222]\n    # [224 230 222 243 249]\n    # [138 210 206 218 242]]\n    for i in range(3):\n        newval = np.round(np.mean(gray[:5, i:i+5]))\n        print(f'Mean of 5x5 neighborhood #{i+1}: {int(newval)}')\n    # output:\n    # Mean of 5x5 neighborhood #1: 152\n    # Mean of 5x5 neighborhood #2: 158\n    # Mean of 5x5 neighborhood #3: 160\n    ```\n    The code above shows that the output of such a convolution operation beginning at the top-left region of the image would be 152. As we slide along the horizontal direction and re-compute the mean of the neighborhood, we get 158. 
As we slide our kernel along the horizontal direction for a second time and re-compute the mean of the neighborhood, we obtain the value of 160. \n    \n    If you prefer, you can verify these values by hand, using the raw pixel values from `gray[:5, :5]` (the 5x5 top-left region of the image).\n\n    ```py\n    mean_blur = np.ones(KERNEL_SIZE, dtype=\"float32\") * (1.0 / (5 ** 2))\n    smoothed_gray = cv2.filter2D(gray, -1, mean_blur)\n    print(f'Smoothed: {smoothed_gray[:5, :5]}')\n    # output:\n    # [[122 123 125 127 128]\n    # [126 127 128 131 132]\n    # [148 149 152 158 160]\n    # [177 179 184 196 202]\n    # [197 199 204 222 229]]\n    ```\n    Notice from the output of our mean filter that the first anchor (center of the neighborhood) has transformed from 215 to 152, the one to the right of it has transformed from 227 to 158, and so on. The math does work out, and you can observe the blur effect directly by running `meanblur_02.py`.\n\n3. As it turns out, `opencv` provides a set of convenience functions to apply filtering onto our images. All three approaches below yield the same output, as can be verified from the output pixel values after executing `meanblur_03.py`:\n\n    ```py\n    # approach 1\n    mean_blur = np.ones(KERNEL_SIZE, dtype=\"float32\") * (1.0 / (5 ** 2))\n    smoothed_gray = cv2.filter2D(gray, -1, mean_blur) \n\n    # approach 2\n    smoothed_gray = cv2.blur(gray, KERNEL_SIZE)\n    \n    # approach 3\n    smoothed_gray = cv2.boxFilter(gray, -1, KERNEL_SIZE)\n    ```\n\nThere are several types of kernels we can apply to achieve a blur filter on our image. The averaging filter method serves as a good introductory point because it is easy to intuit about, but it is good to know that `opencv` provides a collection of convenience functions, each being an implementation of some blurring filter. 
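As a cross-check that does not rely on `opencv` at all, the first neighborhood mean from `meanblur_02.py` can be reproduced with plain `numpy` (a sketch; the 5x5 patch below is copied from the `gray[:5, :5]` values printed earlier):

```py
import numpy as np

# 5x5 top-left patch of the grayscale image, per meanblur_02.py's output
patch = np.array([
    [ 31,  27,  21,  17,  21],
    [ 77,  85,  86,  87,  90],
    [205, 205, 215, 227, 222],
    [224, 230, 222, 243, 249],
    [138, 210, 206, 218, 242],
])
print(int(np.round(patch.mean())))  # 152, matching the first anchor value
```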
See [Handy kernels for image processing](#handy-kernels-for-image-processing) for a list of smoothing kernels implemented in `opencv`.\n\n## Role in Convolutional Neural Networks\nEarlier, it was said that kernels play an integral role in all modern convolutional neural network architectures. Using TensorFlow, one will rely on the `tf.nn.conv2d` function to perform a 2D convolution. The syntax looks like this:\n```py\ntf.nn.conv2d(\n    input,\n    filter,\n    strides,\n    padding,\n    use_cudnn_on_gpu=None,\n    data_format=None,\n    name=None   \n)\n```\n\nWhere:\n- `input` is assumed to be a tensor of shape `(batch, height, width, channels)` where `batch` is the number of images in a minibatch  \n- `filter` is a tensor of shape `(filter_height, filter_width, channels, out_channels)` that specifies the learnable weights of the convolutional kernel  \n- `strides` contains the filter strides and is a list of length 4 (one for each input dimension)  \n- `padding` determines whether the input tensors are padded (with extra zeros) to guarantee the output _from the convolutional layer_ has the same shape as the input. `padding=\"SAME\"` adds padding to the input and `padding=\"VALID\"` results in no padding\n\nWorthy of note is that the `input` and `filter` parameters follow what we've implemented using `opencv` thus far. When we're applying a filter like the mean blur example earlier, we slide our kernel with a stride of 1. In TensorFlow code, we would have set `strides=[1,1,1,1]` such that the kernel would slide by 1 unit across all 4 dimensions (x, y, channel, and image index).\n\nExample of a Convolutional Neural Network architecture[^5]:\n![](assets/c6archit.png)\n\nNotice from the image that the dimension of our output from the first convolution layer is smaller (28x28) than its input (32x32) when we perform the operation without padding. 
`C1` and `C3` are examples of this in the above illustration.\n\nIn `S1` and `S2`, we're applying a max-pooling filter to down-sample our image representation, allowing our network to learn the parameters from the higher-order representations in each region of the image. An example operation is depicted below:\n \n![](assets/c6pooling.png)\n\n## Handy Kernels for Image Processing\n- Averaging Filter: `cv2.blur(img, KERNEL_SIZE)`  \n    - As seen in `meanblur_03.py`, replaces each pixel with the **mean** of its neighboring pixels\n- Median Filter: `cv2.medianBlur(img, KERNEL_SIZE)`\n    - Replaces each pixel with the **median** of its neighboring pixels\n- Gaussian Filter: `cv2.GaussianBlur(img, KERNEL_SIZE, 0)`\n- Bilateral Filter: `cv2.bilateralFilter(img, d, sigmaColor, sigmaSpace)`\n    - An edge-preserving smoothing filter that aims to keep edges sharp\n\n\n#### Gaussian Filtering \nThe Gaussian filter deserves its own section given its prevalence in image processing. It is achieved by convolving each point in the input array (read: each pixel in our image) with a _Gaussian kernel_ and summing the results to produce the output array.\n\nIf you remember your lessons from statistics, you may recall that a 1D Gaussian distribution looks like this:\n<img src=\"assets/normaldist.png\" style=\"width: 50%; margin-left:20%;\">\n\nFor completeness' sake, the code to graph the distribution above is in `utils/gaussiancurve.r`.\n\nFor a 1-dimensional image, the pixel located in the middle would be assigned the largest weight, with the weight of its neighbours decreasing as the spatial distance between them and the center pixel increases. 
\n\nFor the mathematically inclined, the graphed distribution above is generated from the Gaussian function[^6]:\n\n$$g(x) = e^{\\frac{-x^2}{2\\sigma^2}}$$\n\nWhere $x$ is the spatial distance between the center pixel and the corresponding neighbor unit.\n\nFor a 1D kernel of size 7 (with $\\sigma = 1$), each pixel would therefore be weighted accordingly:\n\n$$g(x) = \\begin{bmatrix}.011 & .135 & .607 & 1 & .607 & .135 & .011\\end{bmatrix}$$\n\nThis should not be hard to intuit: if we refer back to the graphed distribution, we can see that at the center pixel (position $x=0$), $g(x)$ evaluates to $1$.\n\n```py\nimport numpy as np\nweights = []\nsd = 1\nfor i in range(4):\n    weights += [np.round(np.exp((-i**2)/(2*sd**2)),3)]\nprint(weights)\n# output:\n# [1.0, 0.607, 0.135, 0.011]\n```\n\nFor a 2D kernel, the formula would take the form of:\n$$g(x,y) = e^{\\frac{-(x^2+y^2)}{2\\sigma^2}}$$\n\nWhen we compare the output of a mean filter to a Gaussian filter, as in the example script in `gaussianblur_01.py`, we can observe the difference in output visually:\n\n![](assets/meanvsgaussian.png)\n\nThis should come as little surprise, since the mean filter just replaces each pixel with the average value of its neighboring pixels, essentially giving a coefficient of 1 (before normalization) to every pixel in the 5x5 grid.  \n\nGaussian filters, on the other hand, **weigh pixels using a Gaussian distribution** (think: bell curve in a 2D space) around the center pixel, such that farther pixels are given a lower coefficient than nearer ones. \n\n\n#### Sharpening Kernels\nThe opposite of blurring would be sharpening. 
There are again several approaches to this, and we'll start by looking at two of them.\n\nThe first approach relies on the familiar `cv2.filter2D()` function to apply the following kernel, and is implemented in `sharpening_01.py`:\n$$K = \\begin{bmatrix} -1 & -1 & -1 \\\\ -1 & 9 & -1 \\\\ -1 & -1 & -1 \\end{bmatrix}$$\n\nThe outcome:\n![](assets/sharpen.png)\n\n\n##### Approximate Gaussian Kernel for Sharpening\nWe can apply the same principles behind a Gaussian kernel for sharpening operations (as opposed to blurring). The full script is in `sharpening_02.py` but the essential parts are as follow:\n\n```py\napprox_gaussian = (\n    np.array(\n        [\n            [-1, -1, -1, -1, -1],\n            [-1, 2, 2, 2, -1],\n            [-1, 2, 8, 2, -1],\n            [-1, 2, 2, 2, -1],\n            [-1, -1, -1, -1, -1],\n        ]\n    )/ 8.0\n)\nsharpen_col = cv2.filter2D(img, -1, approx_gaussian)\n```\n\nNotice how this method uses an approximate Gaussian kernel and that the result is an overall more natural-looking sharpening:\n![](assets/gaussiansharpen.png)\n\n##### Unsharp Masking\nThe second approach is known as \"unsharp masking\", derived from the fact that the technique uses a blurred, or \"unsharp\", negative image to create a mask of the original image[^7]. This technique is one of the oldest tools in photographic processing (dating back to the 1930s), and popular tools such as Adobe Photoshop and GIMP have direct implementations of it named, appropriately, Unsharp Mask. \n\nLifted straight from the Wikipedia article itself, a \"typical blending formula for unsharp masking is **sharpened = original + (original - blurred) * amount**\". **Amount** represents how much contrast is added to the edges.\n\nRewriting the formula, we get:\n$$\\begin{aligned}\nSharpened & = O + (O-B) \\cdot a \\\\\n& = O + Oa - Ba \\\\\n& = O (1+a) + B(-a)\\end{aligned}$$\n\nWhere $a$ is the amount, $B$ is the blurred image (mask) and $O$ is the original image. 
The final form is convenient because we can plug it into `cv2.addWeighted` and get an output. From OpenCV's documentation, the function `addWeighted` calculates the weighted sum of two arrays as follows:\n$$dst(I) = saturate(src1(I) * alpha + src2(I) * beta + gamma)$$\n\nWhen you perform the arithmetic above, you will find that the values may fall outside the range of 0 to 255 (e.g. `src1(I) * alpha` with an alpha greater than 1.5 can produce values greater than 255). Saturation clips the value in a way that is equivalent to the following:\n\n$$Saturate(x) = min(max(round(x), 0), 255)$$\n\nThe following code demonstrates the unsharp masking technique:\n```py\nimg = cv2.imread(\"assets/sarpi.png\")\n\namt = 1.5\nblurred = cv2.GaussianBlur(img, (5,5), 10)\nunsharp = cv2.addWeighted(img, 1+amt, blurred, -amt, 0)\nunsharp_manual = np.clip(img * (1+amt) + blurred * (-amt), 0, 255)\ncv2.imshow(\"Unsharp Masking\", unsharp)\n```\n\n![](assets/unsharpsarpi.png)\nYou can find the sample code for this in `unsharpmask_01.py` (using `addWeighted`) and in `unsharpmask_02.py` (manual calculation) respectively.\n\n## Summary and Key Points\nWhy go to such lengths on the mathematical ideas behind image filtering operations?\n\n> Filtering is perhaps the most fundamental operation of image processing and computer vision. In the broadest sense of the term \"filtering\", the value of the filtered image at a given location is a function of the values of the input image in a small neighborhood of the same location.[^8]\n\nIt is fundamental to a host of common image processing techniques, from enhancement (sharpening, denoising, contrast adjustment) to edge detection, texture detection, and, in the case of deep learning, feature detection. \n\nTo help with your recall, I made a simple illustration below:\n\n![](assets/gaussiankernel.png)\n\nWhenever you're ready, move on to `edgedetect.md` to learn the essentials of edge detection using kernel operations. 
\n\n## References\n[^1]: Making your own linear filters, [OpenCV Documentation](https://docs.opencv.org/2.4/doc/tutorials/imgproc/imgtrans/filter_2d/filter_2d.html)\n\n[^2]: Bradski, Kaehler, Learning OpenCV\n\n[^3]: Stack Exchange, https://stats.stackexchange.com/a/366940\n\n[^4]: [OpenCV Documentation](http://docs.opencv.org/modules/imgproc/doc/filtering.html#filter2d)\n\n[^5]: R.Zadeh and B.Ramsundar, TensorFlow for Deep Learning, O'Reilly Media\n\n[^6]: Wikipedia, Gaussian function, https://en.wikipedia.org/wiki/Gaussian_function\n\n[^7]: W.Fulton, A few scanning tips, Sharpening - Unsharp Mask\n\n[^8]: C. Tomasi and R. Manduchi, \"Bilateral Filtering for Gray and Color Images\", Proceedings of the 1998 IEEE International Conference on Computer Vision, Bombay, India.\n"
  },
  {
    "path": "edgedetect/meanblur_01.py",
    "content": "import numpy as np\nimport cv2\n\nKERNEL_SIZE = (5, 5)\n\nimg = cv2.imread(\"assets/canal.png\")\ngray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)\n\ncv2.imshow(\"Gray\", gray)\ncv2.waitKey(0)\n\n# Create the following 5x5 \n# np.array(\n# [[0.04, 0.04, 0.04, 0.04, 0.04],\n# [0.04, 0.04, 0.04, 0.04, 0.04],\n# [0.04, 0.04, 0.04, 0.04, 0.04],\n# [0.04, 0.04, 0.04, 0.04, 0.04],\n# [0.04, 0.04, 0.04, 0.04, 0.04]])\n\nmean_blur = np.ones(KERNEL_SIZE, dtype=\"float32\") * (1.0 / (5 ** 2))\nsmoothed_col = cv2.filter2D(img, -1, mean_blur)\nsmoothed_gray = cv2.filter2D(gray, -1, mean_blur)\n\ncv2.imshow(\"Smoothed Colored\", smoothed_col)\ncv2.waitKey(0)\n\ncv2.imshow(\"Smoothed Gray\", smoothed_gray)\ncv2.waitKey(0)\n"
  },
  {
    "path": "edgedetect/meanblur_02.py",
    "content": "import numpy as np\nimport cv2\n\nKERNEL_SIZE = (5, 5)\n\nimg = cv2.imread(\"assets/canal.png\")\ngray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)\nprint(f'Gray: {gray[:5, :5]}')\nprint(f'Shape of Original: {gray.shape}')\n\nfor i in range(3):\n    newval = np.round(np.mean(gray[:5, i:i+5]))\n    print(f'Mean of 25x25 pixel #{i+1}: {np.int(newval)}')\n\ncv2.imshow(\"Gray\", gray)\ncv2.waitKey(0)\n\nmean_blur = np.ones(KERNEL_SIZE, dtype=\"float32\") * (1.0 / (5 ** 2))\nsmoothed_col = cv2.filter2D(img, -1, mean_blur)\nsmoothed_gray = cv2.filter2D(gray, -1, mean_blur)\n\ncv2.imshow(\"Smoothed Colored\", smoothed_col)\ncv2.waitKey(0)\n\ncv2.imshow(\"Smoothed Gray\", smoothed_gray)\ncv2.waitKey(0)\nprint(f'Smoothed: {smoothed_gray[:5, :5]}')\nprint(f'Shape of Smoothed: {smoothed_gray.shape}')\n"
  },
  {
    "path": "edgedetect/meanblur_03.py",
    "content": "import numpy as np\nimport cv2\n\nKERNEL_SIZE = (5, 5)\n\nimg = cv2.imread(\"assets/canal.png\")\ngray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)\nprint(f'Gray: {gray[:5, :5]}')\nprint(f'Shape of Original: {gray.shape}')\n\nfor i in range(3):\n    newval = np.round(np.mean(gray[:5, i:i+5]))\n    print(f'Mean of 25x25 pixel #{i+1}: {np.int(newval)}')\n\ncv2.imshow(\"Gray\", gray)\ncv2.waitKey(0)\n\nsmoothed_col = cv2.blur(img, KERNEL_SIZE)\n\n# equivalently:\n# smoothed_gray = cv2.boxFilter(gray, -1, KERNEL_SIZE)\nsmoothed_gray = cv2.blur(gray, KERNEL_SIZE)\n\ncv2.imshow(\"Smoothed Colored\", smoothed_col)\ncv2.waitKey(0)\n\ncv2.imshow(\"Smoothed Gray\", smoothed_gray)\ncv2.waitKey(0)\nprint(f'Smoothed: {smoothed_gray[:5, :5]}')\nprint(f'Shape of Smoothed: {smoothed_gray.shape}')\n"
  },
  {
    "path": "edgedetect/sharpening_01.py",
    "content": "import numpy as np\nimport cv2\n\nimg = cv2.imread(\"assets/canal.png\")\ngray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)\n\nfor i in range(3):\n    newval = np.round(np.mean(gray[:5, i : i + 5]))\n    print(f\"Mean of 25x25 pixel #{i+1}: {np.int(newval)}\")\n\ncv2.imshow(\"Gray\", gray)\ncv2.waitKey(0)\n\nsharpen = np.array([[-1, -1, -1], [-1, 9, -1], [-1, -1, -1]])\nsharpen_col = cv2.filter2D(img, -1, sharpen)\nsharpen_gray = cv2.filter2D(gray, -1, sharpen)\n\ncv2.imshow(\"Sharpen Colored\", sharpen_col)\ncv2.waitKey(0)\n\ncv2.imshow(\"Sharpen Gray\", sharpen_gray)\ncv2.waitKey(0)\n\n"
  },
  {
    "path": "edgedetect/sharpening_02.py",
    "content": "import numpy as np\nimport cv2\n\nimg = cv2.imread(\"assets/canal.png\")\n\ncv2.imshow(\"Original\", img)\ncv2.waitKey(0)\n\napprox_gaussian = (\n    np.array(\n        [\n            [-1, -1, -1, -1, -1],\n            [-1, 2, 2, 2, -1],\n            [-1, 2, 8, 2, -1],\n            [-1, 2, 2, 2, -1],\n            [-1, -1, -1, -1, -1],\n        ]\n    )\n    / 8.0\n)\nsharpen_col = cv2.filter2D(img, -1, approx_gaussian)\n\ncv2.imshow(\"Sharpen (approx. Gaussian)\", sharpen_col)\ncv2.waitKey(0)\ncv2.waitKey(0)\n\n"
  },
  {
    "path": "edgedetect/sobel_01.py",
    "content": "import numpy as np\nimport cv2\nimport matplotlib.pyplot as plt\n\nimg = cv2.imread(\"assets/sudoku.jpg\", 0)\nimg = cv2.medianBlur(img, 5)\nimg = cv2.GaussianBlur(img, (7, 7), 0)\ncv2.imshow(\"Image\", img)\ncv2.waitKey(0)\n\ngradient_x = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)\ngradient_y = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)\nprint(f\"Range: {np.min(gradient_x)} | {np.max(gradient_x)}\")\n\ngradient_x = np.uint8(np.absolute(gradient_x))\ngradient_y = np.uint8(np.absolute(gradient_y))\nprint(f\"Range uint8: {np.min(gradient_x)} | {np.max(gradient_x)}\")\n\ncv2.imshow(\"Gradient X\", gradient_x)\ncv2.waitKey(0)\ncv2.imshow(\"Gradient Y\", gradient_y)\ncv2.waitKey(0)\n\n# plt.imshow(gradient_x, cmap=\"gray\")\n# plt.show()\n\n"
  },
  {
    "path": "edgedetect/sobel_02.py",
    "content": "import numpy as np\nimport cv2\nimport matplotlib.pyplot as plt\n\nimg_original = cv2.imread(\"assets/castello.png\")\nimg_original = cv2.cvtColor(img_original, cv2.COLOR_BGR2RGB)\nimg = cv2.cvtColor(img_original, cv2.COLOR_BGR2GRAY)\nimg = cv2.medianBlur(img, 9)\nimg = cv2.GaussianBlur(img, (9, 9), 0)\n\ngradient_x = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)\ngradient_y = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)\n\ngradient_x = cv2.convertScaleAbs(gradient_x)\ngradient_y = cv2.convertScaleAbs(gradient_y)\nprint(f\"Range: {np.min(gradient_x)} | {np.max(gradient_x)}\")\n\ngradient_xy = cv2.addWeighted(gradient_x, 0.5, gradient_y, 0.5, 0)\n\nplt.subplot(2, 2, 1), plt.imshow(img_original)\nplt.title(\"Original\"), plt.xticks([]), plt.yticks([])\nplt.subplot(2, 2, 2), plt.imshow(gradient_x, cmap=\"gray\")\nplt.title(\"Gradient X\"), plt.xticks([]), plt.yticks([])\nplt.subplot(2, 2, 3), plt.imshow(gradient_y, cmap=\"gray\")\nplt.title(\"Gradient Y\"), plt.xticks([]), plt.yticks([])\nplt.subplot(2, 2, 4), plt.imshow(gradient_xy, cmap=\"gray\")\nplt.title(\"Gradient X and Y\"), plt.xticks([]), plt.yticks([])\nplt.show()\n"
  },
  {
    "path": "edgedetect/sobel_03.py",
    "content": "import numpy as np\nimport cv2\nimport matplotlib.pyplot as plt\n\nimg = cv2.imread(\"assets/castello.png\", flags=0)\nimg = cv2.medianBlur(img, 9)\nimg = cv2.GaussianBlur(img, (9, 9), 0)\n\ngradient_x = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)\ngradient_y = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)\n\ngradient_x = cv2.convertScaleAbs(gradient_x)\ngradient_y = cv2.convertScaleAbs(gradient_y)\nprint(f\"Range: {np.min(gradient_x)} | {np.max(gradient_x)}\")\n\ngradient_xy = cv2.addWeighted(gradient_x, 0.5, gradient_y, 0.5, 0)\n\nplt.imshow(gradient_xy, cmap=\"gray\")\nplt.title(\"Sobel Edge\")\nplt.show()"
  },
  {
    "path": "edgedetect/unsharpmask_01.py",
    "content": "import numpy as np\nimport cv2\n\nKERNEL_SIZE = (5, 5)\n\nimg = cv2.imread(\"assets/sarpi.png\")\ncv2.imshow(\"Original\", img)\ncv2.waitKey(0)\n\namt = 1.5\nblurred = cv2.GaussianBlur(img, (5,5), 10)\nunsharp = cv2.addWeighted(img, 1+amt, blurred, -amt, 0)\n\ncv2.imshow(\"Unsharp Masking\", unsharp)\ncv2.waitKey(0)\n\n\n"
  },
  {
    "path": "edgedetect/unsharpmask_02.py",
    "content": "import numpy as np\nimport cv2\n\nKERNEL_SIZE = (5, 5)\n\nimg = cv2.imread(\"assets/sarpi.png\")\ncv2.imshow(\"Original\", img)\ncv2.waitKey(0)\n\namt = 1.5\nblurred = cv2.GaussianBlur(img, (5,5), 10)\n\nunsharp_manual = np.clip(img * (1+amt) + blurred * (-amt), 0, 255)\n# unsharp_manual = img * (1+amt) + blurred * (-amt)\n# unsharp_manual = np.maximum(unsharp_manual, np.zeros(unsharp_manual.shape))\n# unsharp_manual = np.minimum(unsharp_manual, 255 * np.ones(unsharp_manual.shape))\nunsharp_manual = unsharp_manual.round().astype(np.uint8)\n\ncv2.imshow(\"Unsharp Masking Manual\", unsharp_manual)\ncv2.waitKey(0)\n\n"
  },
  {
    "path": "edgedetect/utils/gaussiancurve.r",
    "content": "x <- seq(-3, 3, length=1000000)\ny <- dnorm(x, mean=0, sd=1)\nplot(x, y, type=\"l\", lwd=1, ylab=\"g(x)\")"
  },
  {
    "path": "quiz.md",
    "content": "## Affine Transformation\n\n1. Which of the following constructs the correct transformation matrix to perform a 2x scaling? \n    - [ ] `np.float32([[2, 0, 0], [0, 2, 0]])`\n    - [ ] `np.float32([[0, 2, 0], [0, 2, 0]])`\n    - [ ] `np.float32([[2, 2, 2], [0, 0, 0]])`\n    - [ ] `np.float32([[2, 1, 1], [1, 2, 1]])`\n\n2. In the case on a 333x333 input image, with a strides of 1 using a kernel of size 5*5, what is the amount of zero-padding you should add to the borders of your image such that the output image is also 333x333?\n    - [ ] 1\n    - [ ] 2\n    - [ ] 3\n    - [ ] No zero-padding\n\n## Kernels and Convolution\n\n3. For an input image of size 140W (Width) x 600H (Height), supposed we perform a convolution with slide S=1 using a filter of size 7W x 7H and two pixels of constant-padding (padding our image with a constant value of 5), what would the dimension of our image be?\n    - [ ] 135 Width x 595 Height\n    - [ ] 140 Width x 600 Height\n    - [ ] 138 Width x 598 Height\n    - [ ] None of the answers above \n\n## Tresholding Edge Detection\n4. In an image with lighting conditions that result in some parts of the image being shaded differently than the others, which of the thresholding techniques may yield a more robust output?\n    - [ ] Pixel-intensity based thresholding\n    - [ ] Otsu's global thresholding method\n    - [ ] Adaptive thresholding\n\n5. We want to retrieve only the extreme outer contours. We do not need to store all the boundary points to minimise redundancy and save memory requirements. Which are the values to be passed into the findContours() function?\n    - [ ] RETR_EXTERNAL, CHAIN_APPROX_SIMPLE\n    - [ ] RETR_EXTERNAL, CHAIN_APPROX_NONE\n    - [ ] RETR_OUTER, CHAIN_APPROX_SIMPLE\n    - [ ] RETR_OUTER, CHAIN_APPROX_NONE\n    - [ ] RETR_LIST, CHAIN_APPROX_NONE\n\n6. 
The function call cv2.Canny(img, 50, 180) will mark which of the following intensity gradients as definite edges?\n    - [ ] 40\n    - [ ] 100\n    - [ ] 200\n\n7. Which of the following is NOT part of the Canny Edge procedure?\n    - [ ] Compute gradient in each direction\n    - [ ] Suppress edges that are non-maximal\n    - [ ] Discard pixels that are more likely noise than true edges\n    - [ ] Retrieve only the extreme outer contours from the edges"
  },
  {
    "path": "requirements.txt",
    "content": "cycler==0.10.0  \ndecorator==4.4.1   \nimageio==2.6.1   \nimutils==0.5.3   \njoblib==0.14.0  \nkiwisolver==1.1.0   \nmahotas==1.4.9   \nmatplotlib==3.1.1   \nnetworkx==2.4     \nnumpy==1.17.4  \nopencv-contrib-python==4.1.1.26\nPillow==8.1.1   \npip==21.1  \npyparsing==2.4.5   \npython-dateutil==2.8.1   \nPyWavelets==1.1.1   \nscikit-image==0.16.2  \nscikit-learn==0.21.3  \nscipy==1.3.2   \nsetuptools==41.6.0  \nsix==1.13.0  \nwheel==0.33.6  \n"
  },
  {
    "path": "summarynotes/class2201.md",
    "content": "# Computer Vision (Chapter 1 to 3)\n\n## Administrative Details\n- Prerequisites:\n    - Python 3\n    - OpenCV\n    - Numpy (automatically installed as dependency to opencv)\n    - Tip: Use `pip install -r requirements.txt` to install from the requirement file (`requirements.txt`) in the repo. Get help from Teaching Assistant (Tommy) or myself before the beginning of the class\n\n- Any code editor\n    - Atom, VSCode, Sublime etc... \n    - Personally, I use VSCode (free)\n\n- Materials\n    - https://github.com/onlyphantom/cvessentials\n\n- WiFi \n    - Network: Accelerice\n    - Password: gapura19 \n\n## Day 1\n1. Synonymous role to data preprocessing\nData Analysis\n    - Read data (usually using pandas as pd)\n    - Inspect your data (dat.shape)\n    - Data Preprocessing\n        - Reshape, ...\n\n2. Basic Routine\n    ```\n    import cv2\n    import numpy as np\n\n    img = cv2.imread(\"Desktop/family.png\")\n    print(img.shape) # output: (h, w, c)\n    \n    gray = cv2.cvtColor(img, cv2.BGR2GRAY)\n    cv2.imshow(\"Gray Image\", gray)\n    cv2.waitKey(0)\n    ```\n\n3. Affine Transformation\n    ```\n    import cv2\n    import numpy as np\n\n    img = cv2.imread(\"Desktop/family.png\")\n    (h, w, c) = img.shape\n    print(f'Height: {h}; Width: {w}')\n\n    gray = cv2.cvtColor(img, cv2.BGR2GRAY)\n\n    # option 1: create 2x3 matrix\n    mat = np.float32([[1, 0, 0], [0, 1, 0]])\n    # option 2: ask for a 2x3 matrix\n    mat = cv2.getRotationMatrix2D(center, angle=180, scale=1)\n    mat = cv2.getAffineTransform(src, dst)\n\n    transformed = cv2.warpAffine(gray, mat, dsize=(h,w))\n\n    cv2.imshow(\"Transformed\", transformed)\n    cv2.waitKey(0)\n    ```"
  },
  {
    "path": "transformation/lecture_affine.html",
    "content": "<!DOCTYPE html><html><head>\n      <title>lecture_affine</title>\n      <meta charset=\"utf-8\">\n      <meta name=\"viewport\" content=\"width=device-width, initial-scale=1.0\">\n      \n      <link rel=\"stylesheet\" href=\"file:////Users/samuel/.vscode/extensions/shd101wyy.markdown-preview-enhanced-0.5.0/node_modules/@shd101wyy/mume/dependencies/katex/katex.min.css\">\n      \n      \n\n      \n      \n      \n      \n      \n      \n      \n\n      <style>\n      /**\n * prism.js Github theme based on GitHub's theme.\n * @author Sam Clarke\n */\ncode[class*=\"language-\"],\npre[class*=\"language-\"] {\n  color: #333;\n  background: none;\n  font-family: Consolas, \"Liberation Mono\", Menlo, Courier, monospace;\n  text-align: left;\n  white-space: pre;\n  word-spacing: normal;\n  word-break: normal;\n  word-wrap: normal;\n  line-height: 1.4;\n\n  -moz-tab-size: 8;\n  -o-tab-size: 8;\n  tab-size: 8;\n\n  -webkit-hyphens: none;\n  -moz-hyphens: none;\n  -ms-hyphens: none;\n  hyphens: none;\n}\n\n/* Code blocks */\npre[class*=\"language-\"] {\n  padding: .8em;\n  overflow: auto;\n  /* border: 1px solid #ddd; */\n  border-radius: 3px;\n  /* background: #fff; */\n  background: #f5f5f5;\n}\n\n/* Inline code */\n:not(pre) > code[class*=\"language-\"] {\n  padding: .1em;\n  border-radius: .3em;\n  white-space: normal;\n  background: #f5f5f5;\n}\n\n.token.comment,\n.token.blockquote {\n  color: #969896;\n}\n\n.token.cdata {\n  color: #183691;\n}\n\n.token.doctype,\n.token.punctuation,\n.token.variable,\n.token.macro.property {\n  color: #333;\n}\n\n.token.operator,\n.token.important,\n.token.keyword,\n.token.rule,\n.token.builtin {\n  color: #a71d5d;\n}\n\n.token.string,\n.token.url,\n.token.regex,\n.token.attr-value {\n  color: #183691;\n}\n\n.token.property,\n.token.number,\n.token.boolean,\n.token.entity,\n.token.atrule,\n.token.constant,\n.token.symbol,\n.token.command,\n.token.code {\n  color: 
#0086b3;\n}\n\n.token.tag,\n.token.selector,\n.token.prolog {\n  color: #63a35c;\n}\n\n.token.function,\n.token.namespace,\n.token.pseudo-element,\n.token.class,\n.token.class-name,\n.token.pseudo-class,\n.token.id,\n.token.url-reference .token.variable,\n.token.attr-name {\n  color: #795da3;\n}\n\n.token.entity {\n  cursor: help;\n}\n\n.token.title,\n.token.title .token.punctuation {\n  font-weight: bold;\n  color: #1d3e81;\n}\n\n.token.list {\n  color: #ed6a43;\n}\n\n.token.inserted {\n  background-color: #eaffea;\n  color: #55a532;\n}\n\n.token.deleted {\n  background-color: #ffecec;\n  color: #bd2c00;\n}\n\n.token.bold {\n  font-weight: bold;\n}\n\n.token.italic {\n  font-style: italic;\n}\n\n\n/* JSON */\n.language-json .token.property {\n  color: #183691;\n}\n\n.language-markup .token.tag .token.punctuation {\n  color: #333;\n}\n\n/* CSS */\ncode.language-css,\n.language-css .token.function {\n  color: #0086b3;\n}\n\n/* YAML */\n.language-yaml .token.atrule {\n  color: #63a35c;\n}\n\ncode.language-yaml {\n  color: #183691;\n}\n\n/* Ruby */\n.language-ruby .token.function {\n  color: #333;\n}\n\n/* Markdown */\n.language-markdown .token.url {\n  color: #795da3;\n}\n\n/* Makefile */\n.language-makefile .token.symbol {\n  color: #795da3;\n}\n\n.language-makefile .token.variable {\n  color: #183691;\n}\n\n.language-makefile .token.builtin {\n  color: #0086b3;\n}\n\n/* Bash */\n.language-bash .token.keyword {\n  color: #0086b3;\n}\n\n/* highlight */\npre[data-line] {\n  position: relative;\n  padding: 1em 0 1em 3em;\n}\npre[data-line] .line-highlight-wrapper {\n  position: absolute;\n  top: 0;\n  left: 0;\n  background-color: transparent;\n  display: block;\n  width: 100%;\n}\n\npre[data-line] .line-highlight {\n  position: absolute;\n  left: 0;\n  right: 0;\n  padding: inherit 0;\n  margin-top: 1em;\n  background: hsla(24, 20%, 50%,.08);\n  background: linear-gradient(to right, hsla(24, 20%, 50%,.1) 70%, hsla(24, 20%, 50%,0));\n  pointer-events: none;\n  
line-height: inherit;\n  white-space: pre;\n}\n\npre[data-line] .line-highlight:before, \npre[data-line] .line-highlight[data-end]:after {\n  content: attr(data-start);\n  position: absolute;\n  top: .4em;\n  left: .6em;\n  min-width: 1em;\n  padding: 0 .5em;\n  background-color: hsla(24, 20%, 50%,.4);\n  color: hsl(24, 20%, 95%);\n  font: bold 65%/1.5 sans-serif;\n  text-align: center;\n  vertical-align: .3em;\n  border-radius: 999px;\n  text-shadow: none;\n  box-shadow: 0 1px white;\n}\n\npre[data-line] .line-highlight[data-end]:after {\n  content: attr(data-end);\n  top: auto;\n  bottom: .4em;\n}html body{font-family:\"Helvetica Neue\",Helvetica,\"Segoe UI\",Arial,freesans,sans-serif;font-size:16px;line-height:1.6;color:#333;background-color:#fff;overflow:initial;box-sizing:border-box;word-wrap:break-word}html body>:first-child{margin-top:0}html body h1,html body h2,html body h3,html body h4,html body h5,html body h6{line-height:1.2;margin-top:1em;margin-bottom:16px;color:#000}html body h1{font-size:2.25em;font-weight:300;padding-bottom:.3em}html body h2{font-size:1.75em;font-weight:400;padding-bottom:.3em}html body h3{font-size:1.5em;font-weight:500}html body h4{font-size:1.25em;font-weight:600}html body h5{font-size:1.1em;font-weight:600}html body h6{font-size:1em;font-weight:600}html body h1,html body h2,html body h3,html body h4,html body h5{font-weight:600}html body h5{font-size:1em}html body h6{color:#5c5c5c}html body strong{color:#000}html body del{color:#5c5c5c}html body a:not([href]){color:inherit;text-decoration:none}html body a{color:#08c;text-decoration:none}html body a:hover{color:#00a3f5;text-decoration:none}html body img{max-width:100%}html body>p{margin-top:0;margin-bottom:16px;word-wrap:break-word}html body>ul,html body>ol{margin-bottom:16px}html body ul,html body ol{padding-left:2em}html body ul.no-list,html body ol.no-list{padding:0;list-style-type:none}html body ul ul,html body ul ol,html body ol ol,html body ol 
ul{margin-top:0;margin-bottom:0}html body li{margin-bottom:0}html body li.task-list-item{list-style:none}html body li>p{margin-top:0;margin-bottom:0}html body .task-list-item-checkbox{margin:0 .2em .25em -1.8em;vertical-align:middle}html body .task-list-item-checkbox:hover{cursor:pointer}html body blockquote{margin:16px 0;font-size:inherit;padding:0 15px;color:#5c5c5c;border-left:4px solid #d6d6d6}html body blockquote>:first-child{margin-top:0}html body blockquote>:last-child{margin-bottom:0}html body hr{height:4px;margin:32px 0;background-color:#d6d6d6;border:0 none}html body table{margin:10px 0 15px 0;border-collapse:collapse;border-spacing:0;display:block;width:100%;overflow:auto;word-break:normal;word-break:keep-all}html body table th{font-weight:bold;color:#000}html body table td,html body table th{border:1px solid #d6d6d6;padding:6px 13px}html body dl{padding:0}html body dl dt{padding:0;margin-top:16px;font-size:1em;font-style:italic;font-weight:bold}html body dl dd{padding:0 16px;margin-bottom:16px}html body code{font-family:Menlo,Monaco,Consolas,'Courier New',monospace;font-size:.85em !important;color:#000;background-color:#f0f0f0;border-radius:3px;padding:.2em 0}html body code::before,html body code::after{letter-spacing:-0.2em;content:\"\\00a0\"}html body pre>code{padding:0;margin:0;font-size:.85em !important;word-break:normal;white-space:pre;background:transparent;border:0}html body .highlight{margin-bottom:16px}html body .highlight pre,html body pre{padding:1em;overflow:auto;font-size:.85em !important;line-height:1.45;border:#d6d6d6;border-radius:3px}html body .highlight pre{margin-bottom:0;word-break:normal}html body pre code,html body pre tt{display:inline;max-width:initial;padding:0;margin:0;overflow:initial;line-height:inherit;word-wrap:normal;background-color:transparent;border:0}html body pre code:before,html body pre tt:before,html body pre code:after,html body pre tt:after{content:normal}html body p,html body blockquote,html body ul,html body 
ol,html body dl,html body pre{margin-top:0;margin-bottom:16px}html body kbd{color:#000;border:1px solid #d6d6d6;border-bottom:2px solid #c7c7c7;padding:2px 4px;background-color:#f0f0f0;border-radius:3px}@media print{html body{background-color:#fff}html body h1,html body h2,html body h3,html body h4,html body h5,html body h6{color:#000;page-break-after:avoid}html body blockquote{color:#5c5c5c}html body pre{page-break-inside:avoid}html body table{display:table}html body img{display:block;max-width:100%;max-height:100%}html body pre,html body code{word-wrap:break-word;white-space:pre}}.markdown-preview{width:100%;height:100%;box-sizing:border-box}.markdown-preview .pagebreak,.markdown-preview .newpage{page-break-before:always}.markdown-preview pre.line-numbers{position:relative;padding-left:3.8em;counter-reset:linenumber}.markdown-preview pre.line-numbers>code{position:relative}.markdown-preview pre.line-numbers .line-numbers-rows{position:absolute;pointer-events:none;top:1em;font-size:100%;left:0;width:3em;letter-spacing:-1px;border-right:1px solid #999;-webkit-user-select:none;-moz-user-select:none;-ms-user-select:none;user-select:none}.markdown-preview pre.line-numbers .line-numbers-rows>span{pointer-events:none;display:block;counter-increment:linenumber}.markdown-preview pre.line-numbers .line-numbers-rows>span:before{content:counter(linenumber);color:#999;display:block;padding-right:.8em;text-align:right}.markdown-preview .mathjax-exps .MathJax_Display{text-align:center !important}.markdown-preview:not([for=\"preview\"]) .code-chunk .btn-group{display:none}.markdown-preview:not([for=\"preview\"]) .code-chunk .status{display:none}.markdown-preview:not([for=\"preview\"]) .code-chunk .output-div{margin-bottom:16px}.scrollbar-style::-webkit-scrollbar{width:8px}.scrollbar-style::-webkit-scrollbar-track{border-radius:10px;background-color:transparent}.scrollbar-style::-webkit-scrollbar-thumb{border-radius:5px;background-color:rgba(150,150,150,0.66);border:4px solid 
rgba(150,150,150,0.66);background-clip:content-box}html body[for=\"html-export\"]:not([data-presentation-mode]){position:relative;width:100%;height:100%;top:0;left:0;margin:0;padding:0;overflow:auto}html body[for=\"html-export\"]:not([data-presentation-mode]) .markdown-preview{position:relative;top:0}@media screen and (min-width:914px){html body[for=\"html-export\"]:not([data-presentation-mode]) .markdown-preview{padding:2em calc(50% - 457px + 2em)}}@media screen and (max-width:914px){html body[for=\"html-export\"]:not([data-presentation-mode]) .markdown-preview{padding:2em}}@media screen and (max-width:450px){html body[for=\"html-export\"]:not([data-presentation-mode]) .markdown-preview{font-size:14px !important;padding:1em}}@media print{html body[for=\"html-export\"]:not([data-presentation-mode]) #sidebar-toc-btn{display:none}}html body[for=\"html-export\"]:not([data-presentation-mode]) #sidebar-toc-btn{position:fixed;bottom:8px;left:8px;font-size:28px;cursor:pointer;color:inherit;z-index:99;width:32px;text-align:center;opacity:.4}html body[for=\"html-export\"]:not([data-presentation-mode])[html-show-sidebar-toc] #sidebar-toc-btn{opacity:1}html body[for=\"html-export\"]:not([data-presentation-mode])[html-show-sidebar-toc] .md-sidebar-toc{position:fixed;top:0;left:0;width:300px;height:100%;padding:32px 0 48px 0;font-size:14px;box-shadow:0 0 4px rgba(150,150,150,0.33);box-sizing:border-box;overflow:auto;background-color:inherit}html body[for=\"html-export\"]:not([data-presentation-mode])[html-show-sidebar-toc] .md-sidebar-toc::-webkit-scrollbar{width:8px}html body[for=\"html-export\"]:not([data-presentation-mode])[html-show-sidebar-toc] .md-sidebar-toc::-webkit-scrollbar-track{border-radius:10px;background-color:transparent}html body[for=\"html-export\"]:not([data-presentation-mode])[html-show-sidebar-toc] .md-sidebar-toc::-webkit-scrollbar-thumb{border-radius:5px;background-color:rgba(150,150,150,0.66);border:4px solid 
rgba(150,150,150,0.66);background-clip:content-box}html body[for=\"html-export\"]:not([data-presentation-mode])[html-show-sidebar-toc] .md-sidebar-toc a{text-decoration:none}html body[for=\"html-export\"]:not([data-presentation-mode])[html-show-sidebar-toc] .md-sidebar-toc ul{padding:0 1.6em;margin-top:.8em}html body[for=\"html-export\"]:not([data-presentation-mode])[html-show-sidebar-toc] .md-sidebar-toc li{margin-bottom:.8em}html body[for=\"html-export\"]:not([data-presentation-mode])[html-show-sidebar-toc] .md-sidebar-toc ul{list-style-type:none}html body[for=\"html-export\"]:not([data-presentation-mode])[html-show-sidebar-toc] .markdown-preview{left:300px;width:calc(100% -  300px);padding:2em calc(50% - 457px -  150px);margin:0;box-sizing:border-box}@media screen and (max-width:1274px){html body[for=\"html-export\"]:not([data-presentation-mode])[html-show-sidebar-toc] .markdown-preview{padding:2em}}@media screen and (max-width:450px){html body[for=\"html-export\"]:not([data-presentation-mode])[html-show-sidebar-toc] .markdown-preview{width:100%}}html body[for=\"html-export\"]:not([data-presentation-mode]):not([html-show-sidebar-toc]) .markdown-preview{left:50%;transform:translateX(-50%)}html body[for=\"html-export\"]:not([data-presentation-mode]):not([html-show-sidebar-toc]) .md-sidebar-toc{display:none}\n/* Please visit the URL below for more information: */\n/*   https://shd101wyy.github.io/markdown-preview-enhanced/#/customize-css */\n.markdown-preview.markdown-preview {\n  font-size: 0.8rem;\n  line-height: 1.2rem;\n}\n.markdown-preview.markdown-preview pre {\n  font-size: 0.7rem;\n}\n.markdown-preview.markdown-preview h1 {\n  font-size: 1.4rem;\n  margin-bottom: 1%;\n}\n.markdown-preview.markdown-preview h2 {\n  font-size: 1.1rem;\n  margin-bottom: 1%;\n}\n.markdown-preview.markdown-preview h3,\n.markdown-preview.markdown-preview h4,\n.markdown-preview.markdown-preview h5,\n.markdown-preview.markdown-preview h6 {\n  margin-bottom: 1%;\n}\n\n      
</style>\n    </head>\n    <body for=\"html-export\">\n      <div class=\"mume markdown-preview  \">\n      <div><h1 class=\"mume-header\" id=\"affine-transformation\">Affine Transformation</h1>\n\n<h2 class=\"mume-header\" id=\"definition\">Definition</h2>\n\n<p>Any transformation that can be expressed in the form of a <em>matrix multiplication</em> (linear transformation) followed by a <em>vector addition</em> (translation).</p>\n<p><span class=\"katex-display\"><span class=\"katex\"><span class=\"katex-mathml\"><math xmlns=\"http://www.w3.org/1998/Math/MathML\"><semantics><mrow><mi>T</mi><mo>=</mo><mi>A</mi><mo>&#x22C5;</mo><mrow><mo fence=\"true\">[</mo><mtable rowspacing=\"0.15999999999999992em\" columnspacing=\"1em\"><mtr><mtd><mstyle scriptlevel=\"0\" displaystyle=\"false\"><mi>x</mi></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel=\"0\" displaystyle=\"false\"><mi>y</mi></mstyle></mtd></mtr></mtable><mo fence=\"true\">]</mo></mrow><mo>+</mo><mi>B</mi></mrow><annotation encoding=\"application/x-tex\">T = A \\cdot \\begin{bmatrix} x \\\\ y \\end{bmatrix} + B</annotation></semantics></math></span><span class=\"katex-html\" aria-hidden=\"true\"><span class=\"base\"><span class=\"strut\" style=\"height:0.68333em;vertical-align:0em;\"></span><span class=\"mord mathdefault\" style=\"margin-right:0.13889em;\">T</span><span class=\"mspace\" style=\"margin-right:0.2777777777777778em;\"></span><span class=\"mrel\">=</span><span class=\"mspace\" style=\"margin-right:0.2777777777777778em;\"></span></span><span class=\"base\"><span class=\"strut\" style=\"height:0.68333em;vertical-align:0em;\"></span><span class=\"mord mathdefault\">A</span><span class=\"mspace\" style=\"margin-right:0.2222222222222222em;\"></span><span class=\"mbin\">&#x22C5;</span><span class=\"mspace\" style=\"margin-right:0.2222222222222222em;\"></span></span><span class=\"base\"><span class=\"strut\" style=\"height:2.40003em;vertical-align:-0.95003em;\"></span><span class=\"minner\"><span 
class=\"mopen delimcenter\" style=\"top:0em;\"><span class=\"delimsizing size3\">[</span></span><span class=\"mord\"><span class=\"mtable\"><span class=\"col-align-c\"><span class=\"vlist-t vlist-t2\"><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:1.45em;\"><span style=\"top:-3.61em;\"><span class=\"pstrut\" style=\"height:3em;\"></span><span class=\"mord\"><span class=\"mord mathdefault\">x</span></span></span><span style=\"top:-2.4099999999999997em;\"><span class=\"pstrut\" style=\"height:3em;\"></span><span class=\"mord\"><span class=\"mord mathdefault\" style=\"margin-right:0.03588em;\">y</span></span></span></span><span class=\"vlist-s\">&#x200B;</span></span><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:0.9500000000000004em;\"><span></span></span></span></span></span></span></span><span class=\"mclose delimcenter\" style=\"top:0em;\"><span class=\"delimsizing size3\">]</span></span></span><span class=\"mspace\" style=\"margin-right:0.2222222222222222em;\"></span><span class=\"mbin\">+</span><span class=\"mspace\" style=\"margin-right:0.2222222222222222em;\"></span></span><span class=\"base\"><span class=\"strut\" style=\"height:0.68333em;vertical-align:0em;\"></span><span class=\"mord mathdefault\" style=\"margin-right:0.05017em;\">B</span></span></span></span></span></p>\n<p>In which:</p>\n<p><span class=\"katex-display\"><span class=\"katex\"><span class=\"katex-mathml\"><math xmlns=\"http://www.w3.org/1998/Math/MathML\"><semantics><mrow><mi>A</mi><mo>=</mo><mrow><mo fence=\"true\">[</mo><mtable rowspacing=\"0.15999999999999992em\" columnspacing=\"1em\"><mtr><mtd><mstyle scriptlevel=\"0\" displaystyle=\"false\"><msub><mi>a</mi><mn>00</mn></msub></mstyle></mtd><mtd><mstyle scriptlevel=\"0\" displaystyle=\"false\"><msub><mi>a</mi><mn>01</mn></msub></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel=\"0\" displaystyle=\"false\"><msub><mi>a</mi><mn>10</mn></msub></mstyle></mtd><mtd><mstyle scriptlevel=\"0\" 
displaystyle=\"false\"><msub><mi>a</mi><mn>11</mn></msub></mstyle></mtd></mtr></mtable><mo fence=\"true\">]</mo></mrow><mo separator=\"true\">;</mo><mi>B</mi><mo>=</mo><mrow><mo fence=\"true\">[</mo><mtable rowspacing=\"0.15999999999999992em\" columnspacing=\"1em\"><mtr><mtd><mstyle scriptlevel=\"0\" displaystyle=\"false\"><msub><mi>b</mi><mn>00</mn></msub></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel=\"0\" displaystyle=\"false\"><msub><mi>b</mi><mn>10</mn></msub></mstyle></mtd></mtr></mtable><mo fence=\"true\">]</mo></mrow></mrow><annotation encoding=\"application/x-tex\">A = \\begin{bmatrix} a_{00} &amp; a_{01} \\\\ a_{10} &amp; a_{11} \\end{bmatrix};   B = \\begin{bmatrix} b_{00} \\\\ b_{10} \\end{bmatrix}</annotation></semantics></math></span><span class=\"katex-html\" aria-hidden=\"true\"><span class=\"base\"><span class=\"strut\" style=\"height:0.68333em;vertical-align:0em;\"></span><span class=\"mord mathdefault\">A</span><span class=\"mspace\" style=\"margin-right:0.2777777777777778em;\"></span><span class=\"mrel\">=</span><span class=\"mspace\" style=\"margin-right:0.2777777777777778em;\"></span></span><span class=\"base\"><span class=\"strut\" style=\"height:2.40003em;vertical-align:-0.95003em;\"></span><span class=\"minner\"><span class=\"mopen delimcenter\" style=\"top:0em;\"><span class=\"delimsizing size3\">[</span></span><span class=\"mord\"><span class=\"mtable\"><span class=\"col-align-c\"><span class=\"vlist-t vlist-t2\"><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:1.45em;\"><span style=\"top:-3.61em;\"><span class=\"pstrut\" style=\"height:3em;\"></span><span class=\"mord\"><span class=\"mord\"><span class=\"mord mathdefault\">a</span><span class=\"msupsub\"><span class=\"vlist-t vlist-t2\"><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:0.30110799999999993em;\"><span style=\"top:-2.5500000000000003em;margin-left:0em;margin-right:0.05em;\"><span class=\"pstrut\" style=\"height:2.7em;\"></span><span class=\"sizing 
reset-size6 size3 mtight\"><span class=\"mord mtight\"><span class=\"mord mtight\">0</span><span class=\"mord mtight\">0</span></span></span></span></span><span class=\"vlist-s\">&#x200B;</span></span><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:0.15em;\"><span></span></span></span></span></span></span></span></span><span style=\"top:-2.4099999999999997em;\"><span class=\"pstrut\" style=\"height:3em;\"></span><span class=\"mord\"><span class=\"mord\"><span class=\"mord mathdefault\">a</span><span class=\"msupsub\"><span class=\"vlist-t vlist-t2\"><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:0.30110799999999993em;\"><span style=\"top:-2.5500000000000003em;margin-left:0em;margin-right:0.05em;\"><span class=\"pstrut\" style=\"height:2.7em;\"></span><span class=\"sizing reset-size6 size3 mtight\"><span class=\"mord mtight\"><span class=\"mord mtight\">1</span><span class=\"mord mtight\">0</span></span></span></span></span><span class=\"vlist-s\">&#x200B;</span></span><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:0.15em;\"><span></span></span></span></span></span></span></span></span></span><span class=\"vlist-s\">&#x200B;</span></span><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:0.9500000000000004em;\"><span></span></span></span></span></span><span class=\"arraycolsep\" style=\"width:0.5em;\"></span><span class=\"arraycolsep\" style=\"width:0.5em;\"></span><span class=\"col-align-c\"><span class=\"vlist-t vlist-t2\"><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:1.45em;\"><span style=\"top:-3.61em;\"><span class=\"pstrut\" style=\"height:3em;\"></span><span class=\"mord\"><span class=\"mord\"><span class=\"mord mathdefault\">a</span><span class=\"msupsub\"><span class=\"vlist-t vlist-t2\"><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:0.30110799999999993em;\"><span style=\"top:-2.5500000000000003em;margin-left:0em;margin-right:0.05em;\"><span class=\"pstrut\" 
style=\"height:2.7em;\"></span><span class=\"sizing reset-size6 size3 mtight\"><span class=\"mord mtight\"><span class=\"mord mtight\">0</span><span class=\"mord mtight\">1</span></span></span></span></span><span class=\"vlist-s\">&#x200B;</span></span><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:0.15em;\"><span></span></span></span></span></span></span></span></span><span style=\"top:-2.4099999999999997em;\"><span class=\"pstrut\" style=\"height:3em;\"></span><span class=\"mord\"><span class=\"mord\"><span class=\"mord mathdefault\">a</span><span class=\"msupsub\"><span class=\"vlist-t vlist-t2\"><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:0.30110799999999993em;\"><span style=\"top:-2.5500000000000003em;margin-left:0em;margin-right:0.05em;\"><span class=\"pstrut\" style=\"height:2.7em;\"></span><span class=\"sizing reset-size6 size3 mtight\"><span class=\"mord mtight\"><span class=\"mord mtight\">1</span><span class=\"mord mtight\">1</span></span></span></span></span><span class=\"vlist-s\">&#x200B;</span></span><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:0.15em;\"><span></span></span></span></span></span></span></span></span></span><span class=\"vlist-s\">&#x200B;</span></span><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:0.9500000000000004em;\"><span></span></span></span></span></span></span></span><span class=\"mclose delimcenter\" style=\"top:0em;\"><span class=\"delimsizing size3\">]</span></span></span><span class=\"mspace\" style=\"margin-right:0.16666666666666666em;\"></span><span class=\"mpunct\">;</span><span class=\"mspace\" style=\"margin-right:0.16666666666666666em;\"></span><span class=\"mord mathdefault\" style=\"margin-right:0.05017em;\">B</span><span class=\"mspace\" style=\"margin-right:0.2777777777777778em;\"></span><span class=\"mrel\">=</span><span class=\"mspace\" style=\"margin-right:0.2777777777777778em;\"></span></span><span class=\"base\"><span class=\"strut\" 
style=\"height:2.40003em;vertical-align:-0.95003em;\"></span><span class=\"minner\"><span class=\"mopen delimcenter\" style=\"top:0em;\"><span class=\"delimsizing size3\">[</span></span><span class=\"mord\"><span class=\"mtable\"><span class=\"col-align-c\"><span class=\"vlist-t vlist-t2\"><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:1.45em;\"><span style=\"top:-3.61em;\"><span class=\"pstrut\" style=\"height:3em;\"></span><span class=\"mord\"><span class=\"mord\"><span class=\"mord mathdefault\">b</span><span class=\"msupsub\"><span class=\"vlist-t vlist-t2\"><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:0.30110799999999993em;\"><span style=\"top:-2.5500000000000003em;margin-left:0em;margin-right:0.05em;\"><span class=\"pstrut\" style=\"height:2.7em;\"></span><span class=\"sizing reset-size6 size3 mtight\"><span class=\"mord mtight\"><span class=\"mord mtight\">0</span><span class=\"mord mtight\">0</span></span></span></span></span><span class=\"vlist-s\">&#x200B;</span></span><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:0.15em;\"><span></span></span></span></span></span></span></span></span><span style=\"top:-2.4099999999999997em;\"><span class=\"pstrut\" style=\"height:3em;\"></span><span class=\"mord\"><span class=\"mord\"><span class=\"mord mathdefault\">b</span><span class=\"msupsub\"><span class=\"vlist-t vlist-t2\"><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:0.30110799999999993em;\"><span style=\"top:-2.5500000000000003em;margin-left:0em;margin-right:0.05em;\"><span class=\"pstrut\" style=\"height:2.7em;\"></span><span class=\"sizing reset-size6 size3 mtight\"><span class=\"mord mtight\"><span class=\"mord mtight\">1</span><span class=\"mord mtight\">0</span></span></span></span></span><span class=\"vlist-s\">&#x200B;</span></span><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:0.15em;\"><span></span></span></span></span></span></span></span></span></span><span 
class=\"vlist-s\">&#x200B;</span></span><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:0.9500000000000004em;\"><span></span></span></span></span></span></span></span><span class=\"mclose delimcenter\" style=\"top:0em;\"><span class=\"delimsizing size3\">]</span></span></span></span></span></span></span></p>\n<p>When concatenated horizontally, this can be expressed in a larger Matrix:</p>\n<p><span class=\"katex-display\"><span class=\"katex\"><span class=\"katex-mathml\"><math xmlns=\"http://www.w3.org/1998/Math/MathML\"><semantics><mrow><mi>M</mi><mo>=</mo><mrow><mo fence=\"true\">[</mo><mtable rowspacing=\"0.15999999999999992em\" columnspacing=\"1em\"><mtr><mtd><mstyle scriptlevel=\"0\" displaystyle=\"false\"><mi>A</mi></mstyle></mtd><mtd><mstyle scriptlevel=\"0\" displaystyle=\"false\"><mi>B</mi></mstyle></mtd></mtr></mtable><mo fence=\"true\">]</mo></mrow><mo>=</mo><mrow><mo fence=\"true\">[</mo><mtable rowspacing=\"0.15999999999999992em\" columnspacing=\"1em\"><mtr><mtd><mstyle scriptlevel=\"0\" displaystyle=\"false\"><msub><mi>a</mi><mn>00</mn></msub></mstyle></mtd><mtd><mstyle scriptlevel=\"0\" displaystyle=\"false\"><msub><mi>a</mi><mn>01</mn></msub></mstyle></mtd><mtd><mstyle scriptlevel=\"0\" displaystyle=\"false\"><msub><mi>b</mi><mn>00</mn></msub></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel=\"0\" displaystyle=\"false\"><msub><mi>a</mi><mn>10</mn></msub></mstyle></mtd><mtd><mstyle scriptlevel=\"0\" displaystyle=\"false\"><msub><mi>a</mi><mn>11</mn></msub></mstyle></mtd><mtd><mstyle scriptlevel=\"0\" displaystyle=\"false\"><msub><mi>b</mi><mn>10</mn></msub></mstyle></mtd></mtr></mtable><mo fence=\"true\">]</mo></mrow></mrow><annotation encoding=\"application/x-tex\">M = \\begin{bmatrix} A &amp; B \\end{bmatrix} = \\begin{bmatrix} a_{00} &amp; a_{01} &amp; b_{00} \\\\  a_{10} &amp; a_{11} &amp; b_{10} \\end{bmatrix}</annotation></semantics></math></span><span class=\"katex-html\" aria-hidden=\"true\"><span class=\"base\"><span 
class=\"strut\" style=\"height:0.68333em;vertical-align:0em;\"></span><span class=\"mord mathdefault\" style=\"margin-right:0.10903em;\">M</span><span class=\"mspace\" style=\"margin-right:0.2777777777777778em;\"></span><span class=\"mrel\">=</span><span class=\"mspace\" style=\"margin-right:0.2777777777777778em;\"></span></span><span class=\"base\"><span class=\"strut\" style=\"height:1.20001em;vertical-align:-0.35001em;\"></span><span class=\"minner\"><span class=\"mopen delimcenter\" style=\"top:0em;\"><span class=\"delimsizing size1\">[</span></span><span class=\"mord\"><span class=\"mtable\"><span class=\"col-align-c\"><span class=\"vlist-t vlist-t2\"><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:0.8500000000000001em;\"><span style=\"top:-3.01em;\"><span class=\"pstrut\" style=\"height:3em;\"></span><span class=\"mord\"><span class=\"mord mathdefault\">A</span></span></span></span><span class=\"vlist-s\">&#x200B;</span></span><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:0.35000000000000003em;\"><span></span></span></span></span></span><span class=\"arraycolsep\" style=\"width:0.5em;\"></span><span class=\"arraycolsep\" style=\"width:0.5em;\"></span><span class=\"col-align-c\"><span class=\"vlist-t vlist-t2\"><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:0.8500000000000001em;\"><span style=\"top:-3.01em;\"><span class=\"pstrut\" style=\"height:3em;\"></span><span class=\"mord\"><span class=\"mord mathdefault\" style=\"margin-right:0.05017em;\">B</span></span></span></span><span class=\"vlist-s\">&#x200B;</span></span><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:0.35000000000000003em;\"><span></span></span></span></span></span></span></span><span class=\"mclose delimcenter\" style=\"top:0em;\"><span class=\"delimsizing size1\">]</span></span></span><span class=\"mspace\" style=\"margin-right:0.2777777777777778em;\"></span><span class=\"mrel\">=</span><span class=\"mspace\" 
style=\"margin-right:0.2777777777777778em;\"></span></span><span class=\"base\"><span class=\"strut\" style=\"height:2.40003em;vertical-align:-0.95003em;\"></span><span class=\"minner\"><span class=\"mopen delimcenter\" style=\"top:0em;\"><span class=\"delimsizing size3\">[</span></span><span class=\"mord\"><span class=\"mtable\"><span class=\"col-align-c\"><span class=\"vlist-t vlist-t2\"><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:1.45em;\"><span style=\"top:-3.61em;\"><span class=\"pstrut\" style=\"height:3em;\"></span><span class=\"mord\"><span class=\"mord\"><span class=\"mord mathdefault\">a</span><span class=\"msupsub\"><span class=\"vlist-t vlist-t2\"><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:0.30110799999999993em;\"><span style=\"top:-2.5500000000000003em;margin-left:0em;margin-right:0.05em;\"><span class=\"pstrut\" style=\"height:2.7em;\"></span><span class=\"sizing reset-size6 size3 mtight\"><span class=\"mord mtight\"><span class=\"mord mtight\">0</span><span class=\"mord mtight\">0</span></span></span></span></span><span class=\"vlist-s\">&#x200B;</span></span><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:0.15em;\"><span></span></span></span></span></span></span></span></span><span style=\"top:-2.4099999999999997em;\"><span class=\"pstrut\" style=\"height:3em;\"></span><span class=\"mord\"><span class=\"mord\"><span class=\"mord mathdefault\">a</span><span class=\"msupsub\"><span class=\"vlist-t vlist-t2\"><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:0.30110799999999993em;\"><span style=\"top:-2.5500000000000003em;margin-left:0em;margin-right:0.05em;\"><span class=\"pstrut\" style=\"height:2.7em;\"></span><span class=\"sizing reset-size6 size3 mtight\"><span class=\"mord mtight\"><span class=\"mord mtight\">1</span><span class=\"mord mtight\">0</span></span></span></span></span><span class=\"vlist-s\">&#x200B;</span></span><span class=\"vlist-r\"><span class=\"vlist\" 
style=\"height:0.15em;\"><span></span></span></span></span></span></span></span></span></span><span class=\"vlist-s\">&#x200B;</span></span><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:0.9500000000000004em;\"><span></span></span></span></span></span><span class=\"arraycolsep\" style=\"width:0.5em;\"></span><span class=\"arraycolsep\" style=\"width:0.5em;\"></span><span class=\"col-align-c\"><span class=\"vlist-t vlist-t2\"><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:1.45em;\"><span style=\"top:-3.61em;\"><span class=\"pstrut\" style=\"height:3em;\"></span><span class=\"mord\"><span class=\"mord\"><span class=\"mord mathdefault\">a</span><span class=\"msupsub\"><span class=\"vlist-t vlist-t2\"><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:0.30110799999999993em;\"><span style=\"top:-2.5500000000000003em;margin-left:0em;margin-right:0.05em;\"><span class=\"pstrut\" style=\"height:2.7em;\"></span><span class=\"sizing reset-size6 size3 mtight\"><span class=\"mord mtight\"><span class=\"mord mtight\">0</span><span class=\"mord mtight\">1</span></span></span></span></span><span class=\"vlist-s\">&#x200B;</span></span><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:0.15em;\"><span></span></span></span></span></span></span></span></span><span style=\"top:-2.4099999999999997em;\"><span class=\"pstrut\" style=\"height:3em;\"></span><span class=\"mord\"><span class=\"mord\"><span class=\"mord mathdefault\">a</span><span class=\"msupsub\"><span class=\"vlist-t vlist-t2\"><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:0.30110799999999993em;\"><span style=\"top:-2.5500000000000003em;margin-left:0em;margin-right:0.05em;\"><span class=\"pstrut\" style=\"height:2.7em;\"></span><span class=\"sizing reset-size6 size3 mtight\"><span class=\"mord mtight\"><span class=\"mord mtight\">1</span><span class=\"mord mtight\">1</span></span></span></span></span><span class=\"vlist-s\">&#x200B;</span></span><span 
class=\"vlist-r\"><span class=\"vlist\" style=\"height:0.15em;\"><span></span></span></span></span></span></span></span></span></span><span class=\"vlist-s\">&#x200B;</span></span><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:0.9500000000000004em;\"><span></span></span></span></span></span><span class=\"arraycolsep\" style=\"width:0.5em;\"></span><span class=\"arraycolsep\" style=\"width:0.5em;\"></span><span class=\"col-align-c\"><span class=\"vlist-t vlist-t2\"><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:1.45em;\"><span style=\"top:-3.61em;\"><span class=\"pstrut\" style=\"height:3em;\"></span><span class=\"mord\"><span class=\"mord\"><span class=\"mord mathdefault\">b</span><span class=\"msupsub\"><span class=\"vlist-t vlist-t2\"><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:0.30110799999999993em;\"><span style=\"top:-2.5500000000000003em;margin-left:0em;margin-right:0.05em;\"><span class=\"pstrut\" style=\"height:2.7em;\"></span><span class=\"sizing reset-size6 size3 mtight\"><span class=\"mord mtight\"><span class=\"mord mtight\">0</span><span class=\"mord mtight\">0</span></span></span></span></span><span class=\"vlist-s\">&#x200B;</span></span><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:0.15em;\"><span></span></span></span></span></span></span></span></span><span style=\"top:-2.4099999999999997em;\"><span class=\"pstrut\" style=\"height:3em;\"></span><span class=\"mord\"><span class=\"mord\"><span class=\"mord mathdefault\">b</span><span class=\"msupsub\"><span class=\"vlist-t vlist-t2\"><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:0.30110799999999993em;\"><span style=\"top:-2.5500000000000003em;margin-left:0em;margin-right:0.05em;\"><span class=\"pstrut\" style=\"height:2.7em;\"></span><span class=\"sizing reset-size6 size3 mtight\"><span class=\"mord mtight\"><span class=\"mord mtight\">1</span><span class=\"mord mtight\">0</span></span></span></span></span><span 
class=\"vlist-s\">&#x200B;</span></span><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:0.15em;\"><span></span></span></span></span></span></span></span></span></span><span class=\"vlist-s\">&#x200B;</span></span><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:0.9500000000000004em;\"><span></span></span></span></span></span></span></span><span class=\"mclose delimcenter\" style=\"top:0em;\"><span class=\"delimsizing size3\">]</span></span></span></span></span></span></span></p>\n<p>By the definition above (<em>matmul</em> + <em>vector addition</em>), an affine transformation can be used to achieve:</p>\n<ul>\n<li>Scaling (linear transformation)</li>\n<li>Rotations (linear transformation)</li>\n<li>Translations (vector additions)</li>\n</ul>\n<p>An affine transformation preserves points, straight lines, and planes. Parallel lines will remain parallel. It does not, however, preserve distances or angles between points.</p>\n<p>We represent an Affine Transformation using a <strong>2x3 matrix</strong>.</p>\n<h3 class=\"mume-header\" id=\"mathematical-definitions\">Mathematical Definitions</h3>\n\n<p>Consider the goal of transforming a 2D vector <span class=\"katex\"><span class=\"katex-mathml\"><math xmlns=\"http://www.w3.org/1998/Math/MathML\"><semantics><mrow><mi>X</mi><mo>=</mo><mrow><mo fence=\"true\">[</mo><mtable rowspacing=\"0.15999999999999992em\" columnspacing=\"1em\"><mtr><mtd><mstyle scriptlevel=\"0\" displaystyle=\"false\"><mi>x</mi></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel=\"0\" displaystyle=\"false\"><mi>y</mi></mstyle></mtd></mtr></mtable><mo fence=\"true\">]</mo></mrow></mrow><annotation encoding=\"application/x-tex\">X = \\begin{bmatrix} x \\\\ y \\end{bmatrix}</annotation></semantics></math></span><span class=\"katex-html\" aria-hidden=\"true\"><span class=\"base\"><span class=\"strut\" style=\"height:0.68333em;vertical-align:0em;\"></span><span class=\"mord mathdefault\" style=\"margin-right:0.07847em;\">X</span><span 
class=\"mspace\" style=\"margin-right:0.2777777777777778em;\"></span><span class=\"mrel\">=</span><span class=\"mspace\" style=\"margin-right:0.2777777777777778em;\"></span></span><span class=\"base\"><span class=\"strut\" style=\"height:2.40003em;vertical-align:-0.95003em;\"></span><span class=\"minner\"><span class=\"mopen delimcenter\" style=\"top:0em;\"><span class=\"delimsizing size3\">[</span></span><span class=\"mord\"><span class=\"mtable\"><span class=\"col-align-c\"><span class=\"vlist-t vlist-t2\"><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:1.45em;\"><span style=\"top:-3.61em;\"><span class=\"pstrut\" style=\"height:3em;\"></span><span class=\"mord\"><span class=\"mord mathdefault\">x</span></span></span><span style=\"top:-2.4099999999999997em;\"><span class=\"pstrut\" style=\"height:3em;\"></span><span class=\"mord\"><span class=\"mord mathdefault\" style=\"margin-right:0.03588em;\">y</span></span></span></span><span class=\"vlist-s\">&#x200B;</span></span><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:0.9500000000000004em;\"><span></span></span></span></span></span></span></span><span class=\"mclose delimcenter\" style=\"top:0em;\"><span class=\"delimsizing size3\">]</span></span></span></span></span></span> using <span class=\"katex\"><span class=\"katex-mathml\"><math xmlns=\"http://www.w3.org/1998/Math/MathML\"><semantics><mrow><mi>A</mi></mrow><annotation encoding=\"application/x-tex\">A</annotation></semantics></math></span><span class=\"katex-html\" aria-hidden=\"true\"><span class=\"base\"><span class=\"strut\" style=\"height:0.68333em;vertical-align:0em;\"></span><span class=\"mord mathdefault\">A</span></span></span></span> and <span class=\"katex\"><span class=\"katex-mathml\"><math xmlns=\"http://www.w3.org/1998/Math/MathML\"><semantics><mrow><mi>B</mi></mrow><annotation encoding=\"application/x-tex\">B</annotation></semantics></math></span><span class=\"katex-html\" aria-hidden=\"true\"><span 
class=\"base\"><span class=\"strut\" style=\"height:0.68333em;vertical-align:0em;\"></span><span class=\"mord mathdefault\" style=\"margin-right:0.05017em;\">B</span></span></span></span> to obtain <span class=\"katex\"><span class=\"katex-mathml\"><math xmlns=\"http://www.w3.org/1998/Math/MathML\"><semantics><mrow><mi>T</mi></mrow><annotation encoding=\"application/x-tex\">T</annotation></semantics></math></span><span class=\"katex-html\" aria-hidden=\"true\"><span class=\"base\"><span class=\"strut\" style=\"height:0.68333em;vertical-align:0em;\"></span><span class=\"mord mathdefault\" style=\"margin-right:0.13889em;\">T</span></span></span></span>:</p>\n<p><span class=\"katex-display\"><span class=\"katex\"><span class=\"katex-mathml\"><math xmlns=\"http://www.w3.org/1998/Math/MathML\"><semantics><mrow><mi>T</mi><mo>=</mo><mi>A</mi><mo>&#x22C5;</mo><mrow><mo fence=\"true\">[</mo><mtable rowspacing=\"0.15999999999999992em\" columnspacing=\"1em\"><mtr><mtd><mstyle scriptlevel=\"0\" displaystyle=\"false\"><mi>x</mi></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel=\"0\" displaystyle=\"false\"><mi>y</mi></mstyle></mtd></mtr></mtable><mo fence=\"true\">]</mo></mrow><mo>+</mo><mi>B</mi></mrow><annotation encoding=\"application/x-tex\">T = A \\cdot \\begin{bmatrix} x \\\\ y \\end{bmatrix} + B</annotation></semantics></math></span><span class=\"katex-html\" aria-hidden=\"true\"><span class=\"base\"><span class=\"strut\" style=\"height:0.68333em;vertical-align:0em;\"></span><span class=\"mord mathdefault\" style=\"margin-right:0.13889em;\">T</span><span class=\"mspace\" style=\"margin-right:0.2777777777777778em;\"></span><span class=\"mrel\">=</span><span class=\"mspace\" style=\"margin-right:0.2777777777777778em;\"></span></span><span class=\"base\"><span class=\"strut\" style=\"height:0.68333em;vertical-align:0em;\"></span><span class=\"mord mathdefault\">A</span><span class=\"mspace\" style=\"margin-right:0.2222222222222222em;\"></span><span 
class=\"mbin\">&#x22C5;</span><span class=\"mspace\" style=\"margin-right:0.2222222222222222em;\"></span></span><span class=\"base\"><span class=\"strut\" style=\"height:2.40003em;vertical-align:-0.95003em;\"></span><span class=\"minner\"><span class=\"mopen delimcenter\" style=\"top:0em;\"><span class=\"delimsizing size3\">[</span></span><span class=\"mord\"><span class=\"mtable\"><span class=\"col-align-c\"><span class=\"vlist-t vlist-t2\"><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:1.45em;\"><span style=\"top:-3.61em;\"><span class=\"pstrut\" style=\"height:3em;\"></span><span class=\"mord\"><span class=\"mord mathdefault\">x</span></span></span><span style=\"top:-2.4099999999999997em;\"><span class=\"pstrut\" style=\"height:3em;\"></span><span class=\"mord\"><span class=\"mord mathdefault\" style=\"margin-right:0.03588em;\">y</span></span></span></span><span class=\"vlist-s\">&#x200B;</span></span><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:0.9500000000000004em;\"><span></span></span></span></span></span></span></span><span class=\"mclose delimcenter\" style=\"top:0em;\"><span class=\"delimsizing size3\">]</span></span></span><span class=\"mspace\" style=\"margin-right:0.2222222222222222em;\"></span><span class=\"mbin\">+</span><span class=\"mspace\" style=\"margin-right:0.2222222222222222em;\"></span></span><span class=\"base\"><span class=\"strut\" style=\"height:0.68333em;vertical-align:0em;\"></span><span class=\"mord mathdefault\" style=\"margin-right:0.05017em;\">B</span></span></span></span></span></p>\n<p>Or equivalently:</p>\n<p><span class=\"katex-display\"><span class=\"katex\"><span class=\"katex-mathml\"><math xmlns=\"http://www.w3.org/1998/Math/MathML\"><semantics><mrow><mi>T</mi><mo>=</mo><mi>M</mi><mo>&#x22C5;</mo><mo stretchy=\"false\">[</mo><mi>x</mi><mo separator=\"true\">,</mo><mi>y</mi><mo separator=\"true\">,</mo><mn>1</mn><msup><mo stretchy=\"false\">]</mo><mi>T</mi></msup><mo>=</mo><mrow><mo 
fence=\"true\">[</mo><mtable rowspacing=\"0.15999999999999992em\" columnspacing=\"1em\"><mtr><mtd><mstyle scriptlevel=\"0\" displaystyle=\"false\"><mrow><msub><mi>a</mi><mn>00</mn></msub><mi>x</mi><mo>+</mo><msub><mi>a</mi><mn>01</mn></msub><mi>y</mi><mo>+</mo><msub><mi>b</mi><mn>00</mn></msub></mrow></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel=\"0\" displaystyle=\"false\"><mrow><msub><mi>a</mi><mn>10</mn></msub><mi>x</mi><mo>+</mo><msub><mi>a</mi><mn>11</mn></msub><mi>y</mi><mo>+</mo><msub><mi>b</mi><mn>10</mn></msub></mrow></mstyle></mtd></mtr></mtable><mo fence=\"true\">]</mo></mrow></mrow><annotation encoding=\"application/x-tex\">T = M \\cdot [x,y,1]^T = \\begin{bmatrix} \na_{00}x + a_{01}y + b_{00} \\\\ a_{10}x + a_{11}y + b_{10}  \\end{bmatrix}</annotation></semantics></math></span><span class=\"katex-html\" aria-hidden=\"true\"><span class=\"base\"><span class=\"strut\" style=\"height:0.68333em;vertical-align:0em;\"></span><span class=\"mord mathdefault\" style=\"margin-right:0.13889em;\">T</span><span class=\"mspace\" style=\"margin-right:0.2777777777777778em;\"></span><span class=\"mrel\">=</span><span class=\"mspace\" style=\"margin-right:0.2777777777777778em;\"></span></span><span class=\"base\"><span class=\"strut\" style=\"height:0.68333em;vertical-align:0em;\"></span><span class=\"mord mathdefault\" style=\"margin-right:0.10903em;\">M</span><span class=\"mspace\" style=\"margin-right:0.2222222222222222em;\"></span><span class=\"mbin\">&#x22C5;</span><span class=\"mspace\" style=\"margin-right:0.2222222222222222em;\"></span></span><span class=\"base\"><span class=\"strut\" style=\"height:1.1413309999999999em;vertical-align:-0.25em;\"></span><span class=\"mopen\">[</span><span class=\"mord mathdefault\">x</span><span class=\"mpunct\">,</span><span class=\"mspace\" style=\"margin-right:0.16666666666666666em;\"></span><span class=\"mord mathdefault\" style=\"margin-right:0.03588em;\">y</span><span class=\"mpunct\">,</span><span class=\"mspace\" 
style=\"margin-right:0.16666666666666666em;\"></span><span class=\"mord\">1</span><span class=\"mclose\"><span class=\"mclose\">]</span><span class=\"msupsub\"><span class=\"vlist-t\"><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:0.8913309999999999em;\"><span style=\"top:-3.113em;margin-right:0.05em;\"><span class=\"pstrut\" style=\"height:2.7em;\"></span><span class=\"sizing reset-size6 size3 mtight\"><span class=\"mord mathdefault mtight\" style=\"margin-right:0.13889em;\">T</span></span></span></span></span></span></span></span><span class=\"mspace\" style=\"margin-right:0.2777777777777778em;\"></span><span class=\"mrel\">=</span><span class=\"mspace\" style=\"margin-right:0.2777777777777778em;\"></span></span><span class=\"base\"><span class=\"strut\" style=\"height:2.40003em;vertical-align:-0.95003em;\"></span><span class=\"minner\"><span class=\"mopen delimcenter\" style=\"top:0em;\"><span class=\"delimsizing size3\">[</span></span><span class=\"mord\"><span class=\"mtable\"><span class=\"col-align-c\"><span class=\"vlist-t vlist-t2\"><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:1.45em;\"><span style=\"top:-3.61em;\"><span class=\"pstrut\" style=\"height:3em;\"></span><span class=\"mord\"><span class=\"mord\"><span class=\"mord mathdefault\">a</span><span class=\"msupsub\"><span class=\"vlist-t vlist-t2\"><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:0.30110799999999993em;\"><span style=\"top:-2.5500000000000003em;margin-left:0em;margin-right:0.05em;\"><span class=\"pstrut\" style=\"height:2.7em;\"></span><span class=\"sizing reset-size6 size3 mtight\"><span class=\"mord mtight\"><span class=\"mord mtight\">0</span><span class=\"mord mtight\">0</span></span></span></span></span><span class=\"vlist-s\">&#x200B;</span></span><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:0.15em;\"><span></span></span></span></span></span></span><span class=\"mord mathdefault\">x</span><span class=\"mspace\" 
style=\"margin-right:0.2222222222222222em;\"></span><span class=\"mbin\">+</span><span class=\"mspace\" style=\"margin-right:0.2222222222222222em;\"></span><span class=\"mord\"><span class=\"mord mathdefault\">a</span><span class=\"msupsub\"><span class=\"vlist-t vlist-t2\"><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:0.30110799999999993em;\"><span style=\"top:-2.5500000000000003em;margin-left:0em;margin-right:0.05em;\"><span class=\"pstrut\" style=\"height:2.7em;\"></span><span class=\"sizing reset-size6 size3 mtight\"><span class=\"mord mtight\"><span class=\"mord mtight\">0</span><span class=\"mord mtight\">1</span></span></span></span></span><span class=\"vlist-s\">&#x200B;</span></span><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:0.15em;\"><span></span></span></span></span></span></span><span class=\"mord mathdefault\" style=\"margin-right:0.03588em;\">y</span><span class=\"mspace\" style=\"margin-right:0.2222222222222222em;\"></span><span class=\"mbin\">+</span><span class=\"mspace\" style=\"margin-right:0.2222222222222222em;\"></span><span class=\"mord\"><span class=\"mord mathdefault\">b</span><span class=\"msupsub\"><span class=\"vlist-t vlist-t2\"><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:0.30110799999999993em;\"><span style=\"top:-2.5500000000000003em;margin-left:0em;margin-right:0.05em;\"><span class=\"pstrut\" style=\"height:2.7em;\"></span><span class=\"sizing reset-size6 size3 mtight\"><span class=\"mord mtight\"><span class=\"mord mtight\">0</span><span class=\"mord mtight\">0</span></span></span></span></span><span class=\"vlist-s\">&#x200B;</span></span><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:0.15em;\"><span></span></span></span></span></span></span></span></span><span style=\"top:-2.4099999999999997em;\"><span class=\"pstrut\" style=\"height:3em;\"></span><span class=\"mord\"><span class=\"mord\"><span class=\"mord mathdefault\">a</span><span class=\"msupsub\"><span 
class=\"vlist-t vlist-t2\"><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:0.30110799999999993em;\"><span style=\"top:-2.5500000000000003em;margin-left:0em;margin-right:0.05em;\"><span class=\"pstrut\" style=\"height:2.7em;\"></span><span class=\"sizing reset-size6 size3 mtight\"><span class=\"mord mtight\"><span class=\"mord mtight\">1</span><span class=\"mord mtight\">0</span></span></span></span></span><span class=\"vlist-s\">&#x200B;</span></span><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:0.15em;\"><span></span></span></span></span></span></span><span class=\"mord mathdefault\">x</span><span class=\"mspace\" style=\"margin-right:0.2222222222222222em;\"></span><span class=\"mbin\">+</span><span class=\"mspace\" style=\"margin-right:0.2222222222222222em;\"></span><span class=\"mord\"><span class=\"mord mathdefault\">a</span><span class=\"msupsub\"><span class=\"vlist-t vlist-t2\"><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:0.30110799999999993em;\"><span style=\"top:-2.5500000000000003em;margin-left:0em;margin-right:0.05em;\"><span class=\"pstrut\" style=\"height:2.7em;\"></span><span class=\"sizing reset-size6 size3 mtight\"><span class=\"mord mtight\"><span class=\"mord mtight\">1</span><span class=\"mord mtight\">1</span></span></span></span></span><span class=\"vlist-s\">&#x200B;</span></span><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:0.15em;\"><span></span></span></span></span></span></span><span class=\"mord mathdefault\" style=\"margin-right:0.03588em;\">y</span><span class=\"mspace\" style=\"margin-right:0.2222222222222222em;\"></span><span class=\"mbin\">+</span><span class=\"mspace\" style=\"margin-right:0.2222222222222222em;\"></span><span class=\"mord\"><span class=\"mord mathdefault\">b</span><span class=\"msupsub\"><span class=\"vlist-t vlist-t2\"><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:0.30110799999999993em;\"><span 
style=\"top:-2.5500000000000003em;margin-left:0em;margin-right:0.05em;\"><span class=\"pstrut\" style=\"height:2.7em;\"></span><span class=\"sizing reset-size6 size3 mtight\"><span class=\"mord mtight\"><span class=\"mord mtight\">1</span><span class=\"mord mtight\">0</span></span></span></span></span><span class=\"vlist-s\">&#x200B;</span></span><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:0.15em;\"><span></span></span></span></span></span></span></span></span></span><span class=\"vlist-s\">&#x200B;</span></span><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:0.9500000000000004em;\"><span></span></span></span></span></span></span></span><span class=\"mclose delimcenter\" style=\"top:0em;\"><span class=\"delimsizing size3\">]</span></span></span></span></span></span></span></p>\n<h4 class=\"mume-header\" id=\"practical-examples\">Practical Examples</h4>\n\n<p>In <code>scale_04.py</code> from the <strong>Examples and Illustrations</strong> section, you&apos;ll see that the 2x3 matrix <span class=\"katex\"><span class=\"katex-mathml\"><math xmlns=\"http://www.w3.org/1998/Math/MathML\"><semantics><mrow><mi>M</mi></mrow><annotation encoding=\"application/x-tex\">M</annotation></semantics></math></span><span class=\"katex-html\" aria-hidden=\"true\"><span class=\"base\"><span class=\"strut\" style=\"height:0.68333em;vertical-align:0em;\"></span><span class=\"mord mathdefault\" style=\"margin-right:0.10903em;\">M</span></span></span></span> is simply defined as:<br>\n<code>np.float32([[3, 0, 0], [0, 3, 0]])</code></p>\n<p>When you explicitly specify a 2x3 matrix, think of the first two columns as the <span class=\"katex\"><span class=\"katex-mathml\"><math xmlns=\"http://www.w3.org/1998/Math/MathML\"><semantics><mrow><mi>A</mi></mrow><annotation encoding=\"application/x-tex\">A</annotation></semantics></math></span><span class=\"katex-html\" aria-hidden=\"true\"><span class=\"base\"><span class=\"strut\" 
style=\"height:0.68333em;vertical-align:0em;\"></span><span class=\"mord mathdefault\">A</span></span></span></span> component, or the matrix-multiplication process. The third column, naturally, represents the <span class=\"katex\"><span class=\"katex-mathml\"><math xmlns=\"http://www.w3.org/1998/Math/MathML\"><semantics><mrow><mi>B</mi></mrow><annotation encoding=\"application/x-tex\">B</annotation></semantics></math></span><span class=\"katex-html\" aria-hidden=\"true\"><span class=\"base\"><span class=\"strut\" style=\"height:0.68333em;vertical-align:0em;\"></span><span class=\"mord mathdefault\" style=\"margin-right:0.05017em;\">B</span></span></span></span> component, or the vector addition process. This may sound a little abstract, so I encourage you to pause and take a look at the code below:</p>\n<pre data-role=\"codeBlock\" data-info=\"py\" class=\"language-python\"><span class=\"token punctuation\">(</span>h<span class=\"token punctuation\">,</span> w<span class=\"token punctuation\">)</span> <span class=\"token operator\">=</span> img<span class=\"token punctuation\">.</span>shape<span class=\"token punctuation\">[</span><span class=\"token punctuation\">:</span><span class=\"token number\">2</span><span class=\"token punctuation\">]</span>\nmat <span class=\"token operator\">=</span> np<span class=\"token punctuation\">.</span>float32<span class=\"token punctuation\">(</span><span class=\"token punctuation\">[</span><span class=\"token punctuation\">[</span><span class=\"token number\">1</span><span class=\"token punctuation\">,</span> <span class=\"token number\">0</span><span class=\"token punctuation\">,</span> <span class=\"token operator\">-</span><span class=\"token number\">140</span><span class=\"token punctuation\">]</span><span class=\"token punctuation\">,</span> <span class=\"token punctuation\">[</span><span class=\"token number\">0</span><span class=\"token punctuation\">,</span> <span class=\"token number\">1</span><span class=\"token 
punctuation\">,</span> <span class=\"token number\">20</span><span class=\"token punctuation\">]</span><span class=\"token punctuation\">]</span><span class=\"token punctuation\">)</span>\ntranslated <span class=\"token operator\">=</span> cv2<span class=\"token punctuation\">.</span>warpAffine<span class=\"token punctuation\">(</span>img<span class=\"token punctuation\">,</span> mat<span class=\"token punctuation\">,</span> <span class=\"token punctuation\">(</span>w<span class=\"token punctuation\">,</span> h<span class=\"token punctuation\">)</span><span class=\"token punctuation\">)</span>\ncv2<span class=\"token punctuation\">.</span>imshow<span class=\"token punctuation\">(</span><span class=\"token string\">&quot;Translated&quot;</span><span class=\"token punctuation\">,</span> translated<span class=\"token punctuation\">)</span>\n</pre><p>Notice that our <span class=\"katex\"><span class=\"katex-mathml\"><math xmlns=\"http://www.w3.org/1998/Math/MathML\"><semantics><mrow><mi>A</mi></mrow><annotation encoding=\"application/x-tex\">A</annotation></semantics></math></span><span class=\"katex-html\" aria-hidden=\"true\"><span class=\"base\"><span class=\"strut\" style=\"height:0.68333em;vertical-align:0em;\"></span><span class=\"mord mathdefault\">A</span></span></span></span> is an <em>identity matrix</em> of size 2. An identity matrix is the matrix equivalent of a scalar 1. 
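</p>
<p>As a quick sanity check (a minimal sketch, assuming NumPy is installed and imported as <code>np</code>):</p>

```python
import numpy as np

I = np.eye(2)                # the 2x2 identity matrix, playing the role of A
v = np.array([3.0, 5.0])     # an arbitrary 2D point
print(I @ v)                 # prints [3. 5.] -- the point is unchanged
```

<p>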
Multiplying a matrix by the identity matrix leaves it unchanged.</p>\n<p><span class=\"katex-display\"><span class=\"katex\"><span class=\"katex-mathml\"><math xmlns=\"http://www.w3.org/1998/Math/MathML\"><semantics><mrow><mi>T</mi><mo>=</mo><mrow><mo fence=\"true\">[</mo><mtable rowspacing=\"0.15999999999999992em\" columnspacing=\"1em\"><mtr><mtd><mstyle scriptlevel=\"0\" displaystyle=\"false\"><mn>1</mn></mstyle></mtd><mtd><mstyle scriptlevel=\"0\" displaystyle=\"false\"><mn>0</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel=\"0\" displaystyle=\"false\"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel=\"0\" displaystyle=\"false\"><mn>1</mn></mstyle></mtd></mtr></mtable><mo fence=\"true\">]</mo></mrow><mo>&#x22C5;</mo><mrow><mo fence=\"true\">[</mo><mtable rowspacing=\"0.15999999999999992em\" columnspacing=\"1em\"><mtr><mtd><mstyle scriptlevel=\"0\" displaystyle=\"false\"><mi>x</mi></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel=\"0\" displaystyle=\"false\"><mi>y</mi></mstyle></mtd></mtr></mtable><mo fence=\"true\">]</mo></mrow><mo>+</mo><mrow><mo fence=\"true\">[</mo><mtable rowspacing=\"0.15999999999999992em\" columnspacing=\"1em\"><mtr><mtd><mstyle scriptlevel=\"0\" displaystyle=\"false\"><mrow><mo>&#x2212;</mo><mn>140</mn></mrow></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel=\"0\" displaystyle=\"false\"><mn>20</mn></mstyle></mtd></mtr></mtable><mo fence=\"true\">]</mo></mrow></mrow><annotation encoding=\"application/x-tex\">T  = \\begin{bmatrix} 1 &amp; 0 \\\\ 0 &amp; 1 \\end{bmatrix}  \\cdot \\begin{bmatrix} x \\\\ y \\end{bmatrix} + \\begin{bmatrix} -140 \\\\ 20 \\end{bmatrix}</annotation></semantics></math></span><span class=\"katex-html\" aria-hidden=\"true\"><span class=\"base\"><span class=\"strut\" style=\"height:0.68333em;vertical-align:0em;\"></span><span class=\"mord mathdefault\" style=\"margin-right:0.13889em;\">T</span><span class=\"mspace\" style=\"margin-right:0.2777777777777778em;\"></span><span 
class=\"mrel\">=</span><span class=\"mspace\" style=\"margin-right:0.2777777777777778em;\"></span></span><span class=\"base\"><span class=\"strut\" style=\"height:2.40003em;vertical-align:-0.95003em;\"></span><span class=\"minner\"><span class=\"mopen delimcenter\" style=\"top:0em;\"><span class=\"delimsizing size3\">[</span></span><span class=\"mord\"><span class=\"mtable\"><span class=\"col-align-c\"><span class=\"vlist-t vlist-t2\"><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:1.45em;\"><span style=\"top:-3.61em;\"><span class=\"pstrut\" style=\"height:3em;\"></span><span class=\"mord\"><span class=\"mord\">1</span></span></span><span style=\"top:-2.4099999999999997em;\"><span class=\"pstrut\" style=\"height:3em;\"></span><span class=\"mord\"><span class=\"mord\">0</span></span></span></span><span class=\"vlist-s\">&#x200B;</span></span><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:0.9500000000000004em;\"><span></span></span></span></span></span><span class=\"arraycolsep\" style=\"width:0.5em;\"></span><span class=\"arraycolsep\" style=\"width:0.5em;\"></span><span class=\"col-align-c\"><span class=\"vlist-t vlist-t2\"><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:1.45em;\"><span style=\"top:-3.61em;\"><span class=\"pstrut\" style=\"height:3em;\"></span><span class=\"mord\"><span class=\"mord\">0</span></span></span><span style=\"top:-2.4099999999999997em;\"><span class=\"pstrut\" style=\"height:3em;\"></span><span class=\"mord\"><span class=\"mord\">1</span></span></span></span><span class=\"vlist-s\">&#x200B;</span></span><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:0.9500000000000004em;\"><span></span></span></span></span></span></span></span><span class=\"mclose delimcenter\" style=\"top:0em;\"><span class=\"delimsizing size3\">]</span></span></span><span class=\"mspace\" style=\"margin-right:0.2222222222222222em;\"></span><span class=\"mbin\">&#x22C5;</span><span class=\"mspace\" 
style=\"margin-right:0.2222222222222222em;\"></span></span><span class=\"base\"><span class=\"strut\" style=\"height:2.40003em;vertical-align:-0.95003em;\"></span><span class=\"minner\"><span class=\"mopen delimcenter\" style=\"top:0em;\"><span class=\"delimsizing size3\">[</span></span><span class=\"mord\"><span class=\"mtable\"><span class=\"col-align-c\"><span class=\"vlist-t vlist-t2\"><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:1.45em;\"><span style=\"top:-3.61em;\"><span class=\"pstrut\" style=\"height:3em;\"></span><span class=\"mord\"><span class=\"mord mathdefault\">x</span></span></span><span style=\"top:-2.4099999999999997em;\"><span class=\"pstrut\" style=\"height:3em;\"></span><span class=\"mord\"><span class=\"mord mathdefault\" style=\"margin-right:0.03588em;\">y</span></span></span></span><span class=\"vlist-s\">&#x200B;</span></span><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:0.9500000000000004em;\"><span></span></span></span></span></span></span></span><span class=\"mclose delimcenter\" style=\"top:0em;\"><span class=\"delimsizing size3\">]</span></span></span><span class=\"mspace\" style=\"margin-right:0.2222222222222222em;\"></span><span class=\"mbin\">+</span><span class=\"mspace\" style=\"margin-right:0.2222222222222222em;\"></span></span><span class=\"base\"><span class=\"strut\" style=\"height:2.40003em;vertical-align:-0.95003em;\"></span><span class=\"minner\"><span class=\"mopen delimcenter\" style=\"top:0em;\"><span class=\"delimsizing size3\">[</span></span><span class=\"mord\"><span class=\"mtable\"><span class=\"col-align-c\"><span class=\"vlist-t vlist-t2\"><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:1.45em;\"><span style=\"top:-3.61em;\"><span class=\"pstrut\" style=\"height:3em;\"></span><span class=\"mord\"><span class=\"mord\">&#x2212;</span><span class=\"mord\">1</span><span class=\"mord\">4</span><span class=\"mord\">0</span></span></span><span 
style=\"top:-2.4099999999999997em;\"><span class=\"pstrut\" style=\"height:3em;\"></span><span class=\"mord\"><span class=\"mord\">2</span><span class=\"mord\">0</span></span></span></span><span class=\"vlist-s\">&#x200B;</span></span><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:0.9500000000000004em;\"><span></span></span></span></span></span></span></span><span class=\"mclose delimcenter\" style=\"top:0em;\"><span class=\"delimsizing size3\">]</span></span></span></span></span></span></span></p>\n<p>Which leads to:<br>\n<span class=\"katex-display\"><span class=\"katex\"><span class=\"katex-mathml\"><math xmlns=\"http://www.w3.org/1998/Math/MathML\"><semantics><mrow><mi>T</mi><mo>=</mo><mrow><mo fence=\"true\">[</mo><mtable rowspacing=\"0.15999999999999992em\" columnspacing=\"1em\"><mtr><mtd><mstyle scriptlevel=\"0\" displaystyle=\"false\"><mrow><mn>1</mn><mo>&#x22C5;</mo><mi>x</mi><mo>+</mo><mn>0</mn><mo>&#x22C5;</mo><mi>y</mi><mo>&#x2212;</mo><mn>140</mn></mrow></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel=\"0\" displaystyle=\"false\"><mrow><mn>0</mn><mo>&#x22C5;</mo><mi>x</mi><mo>+</mo><mn>1</mn><mo>&#x22C5;</mo><mi>y</mi><mo>+</mo><mn>20</mn></mrow></mstyle></mtd></mtr></mtable><mo fence=\"true\">]</mo></mrow></mrow><annotation encoding=\"application/x-tex\">T  = \\begin{bmatrix} 1 \\cdot x + 0 \\cdot y -140 \\\\ 0 \\cdot x + 1 \\cdot y + 20 \\end{bmatrix}</annotation></semantics></math></span><span class=\"katex-html\" aria-hidden=\"true\"><span class=\"base\"><span class=\"strut\" style=\"height:0.68333em;vertical-align:0em;\"></span><span class=\"mord mathdefault\" style=\"margin-right:0.13889em;\">T</span><span class=\"mspace\" style=\"margin-right:0.2777777777777778em;\"></span><span class=\"mrel\">=</span><span class=\"mspace\" style=\"margin-right:0.2777777777777778em;\"></span></span><span class=\"base\"><span class=\"strut\" style=\"height:2.40003em;vertical-align:-0.95003em;\"></span><span class=\"minner\"><span class=\"mopen 
delimcenter\" style=\"top:0em;\"><span class=\"delimsizing size3\">[</span></span><span class=\"mord\"><span class=\"mtable\"><span class=\"col-align-c\"><span class=\"vlist-t vlist-t2\"><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:1.45em;\"><span style=\"top:-3.61em;\"><span class=\"pstrut\" style=\"height:3em;\"></span><span class=\"mord\"><span class=\"mord\">1</span><span class=\"mspace\" style=\"margin-right:0.2222222222222222em;\"></span><span class=\"mbin\">&#x22C5;</span><span class=\"mspace\" style=\"margin-right:0.2222222222222222em;\"></span><span class=\"mord mathdefault\">x</span><span class=\"mspace\" style=\"margin-right:0.2222222222222222em;\"></span><span class=\"mbin\">+</span><span class=\"mspace\" style=\"margin-right:0.2222222222222222em;\"></span><span class=\"mord\">0</span><span class=\"mspace\" style=\"margin-right:0.2222222222222222em;\"></span><span class=\"mbin\">&#x22C5;</span><span class=\"mspace\" style=\"margin-right:0.2222222222222222em;\"></span><span class=\"mord mathdefault\" style=\"margin-right:0.03588em;\">y</span><span class=\"mspace\" style=\"margin-right:0.2222222222222222em;\"></span><span class=\"mbin\">&#x2212;</span><span class=\"mspace\" style=\"margin-right:0.2222222222222222em;\"></span><span class=\"mord\">1</span><span class=\"mord\">4</span><span class=\"mord\">0</span></span></span><span style=\"top:-2.4099999999999997em;\"><span class=\"pstrut\" style=\"height:3em;\"></span><span class=\"mord\"><span class=\"mord\">0</span><span class=\"mspace\" style=\"margin-right:0.2222222222222222em;\"></span><span class=\"mbin\">&#x22C5;</span><span class=\"mspace\" style=\"margin-right:0.2222222222222222em;\"></span><span class=\"mord mathdefault\">x</span><span class=\"mspace\" style=\"margin-right:0.2222222222222222em;\"></span><span class=\"mbin\">+</span><span class=\"mspace\" style=\"margin-right:0.2222222222222222em;\"></span><span class=\"mord\">1</span><span class=\"mspace\" 
style=\"margin-right:0.2222222222222222em;\"></span><span class=\"mbin\">&#x22C5;</span><span class=\"mspace\" style=\"margin-right:0.2222222222222222em;\"></span><span class=\"mord mathdefault\" style=\"margin-right:0.03588em;\">y</span><span class=\"mspace\" style=\"margin-right:0.2222222222222222em;\"></span><span class=\"mbin\">+</span><span class=\"mspace\" style=\"margin-right:0.2222222222222222em;\"></span><span class=\"mord\">2</span><span class=\"mord\">0</span></span></span></span><span class=\"vlist-s\">&#x200B;</span></span><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:0.9500000000000004em;\"><span></span></span></span></span></span></span></span><span class=\"mclose delimcenter\" style=\"top:0em;\"><span class=\"delimsizing size3\">]</span></span></span></span></span></span></span></p>\n<p>And our <span class=\"katex\"><span class=\"katex-mathml\"><math xmlns=\"http://www.w3.org/1998/Math/MathML\"><semantics><mrow><mi>B</mi></mrow><annotation encoding=\"application/x-tex\">B</annotation></semantics></math></span><span class=\"katex-html\" aria-hidden=\"true\"><span class=\"base\"><span class=\"strut\" style=\"height:0.68333em;vertical-align:0em;\"></span><span class=\"mord mathdefault\" style=\"margin-right:0.05017em;\">B</span></span></span></span>, the vector addition component, moves -- or more formally, translates -- each pixel on the image by -140 in the <span class=\"katex\"><span class=\"katex-mathml\"><math xmlns=\"http://www.w3.org/1998/Math/MathML\"><semantics><mrow><mi>x</mi></mrow><annotation encoding=\"application/x-tex\">x</annotation></semantics></math></span><span class=\"katex-html\" aria-hidden=\"true\"><span class=\"base\"><span class=\"strut\" style=\"height:0.43056em;vertical-align:0em;\"></span><span class=\"mord mathdefault\">x</span></span></span></span> direction and 20 in the <span class=\"katex\"><span class=\"katex-mathml\"><math 
xmlns=\"http://www.w3.org/1998/Math/MathML\"><semantics><mrow><mi>y</mi></mrow><annotation encoding=\"application/x-tex\">y</annotation></semantics></math></span><span class=\"katex-html\" aria-hidden=\"true\"><span class=\"base\"><span class=\"strut\" style=\"height:0.625em;vertical-align:-0.19444em;\"></span><span class=\"mord mathdefault\" style=\"margin-right:0.03588em;\">y</span></span></span></span> direction. Find the full code example in <code>translate_01.py</code>.</p>\n<h2 class=\"mume-header\" id=\"motivation\">Motivation</h2>\n\n<ol>\n<li>\n<p>Imaging systems in the real world are often subject to <strong>geometric distortion</strong>. The distortion may be introduced by perspective irregularities, physical constraints (e.g. camera placement), or other causes.</p>\n</li>\n<li>\n<p>In the field of GIS (geographic information systems), one routinely uses affine transformations to &quot;convert&quot; geographic coordinates into screen coordinates so that they can <strong>be displayed</strong> on our handheld / navigational devices.</p>\n</li>\n<li>\n<p>One may also overlay coordinate data on spatial data that references a different coordinate system, or <strong>&quot;stitch&quot; together</strong> different sources of data using a series of transformations.</p>\n</li>\n</ol>\n<p>These are but a handful of examples where one may expect to see routine use of affine transformations. If you&apos;re spending any amount of time in computer vision, a high degree of familiarity with these remapping routines in OpenCV will come in very handy.</p>\n<p>In your learn-by-building section, you will find a less-than-perfectly digitized map, <code>belitung_raw.jpg</code>. 
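</p>\n<p>A practical detail for this kind of exercise: <code>cv2.warpAffine</code> crops anything that falls outside <code>dsize</code>, so it helps to estimate how large the warped result will be before choosing an output size. A pure-<code>numpy</code> sketch (the 2x3 matrix <code>M</code> and the 640x480 source size below are made-up stand-ins, not the answer to the exercise):</p>\n<pre data-role=\"codeBlock\" data-info=\"py\" class=\"language-python\">import numpy as np\n\n# Hypothetical 2x3 affine matrix and source size -- stand-ins for illustration\nM = np.float32([[1.25, 0, -15], [0, 1, 20]])\nh, w = 480, 640\n\n# Push the four source corners through the transform (homogeneous coordinates)\ncorners = np.float32([[0, 0, 1], [w, 0, 1], [0, h, 1], [w, h, 1]])\nwarped = corners @ M.T\n\n# A dsize large enough to contain every warped corner; note that\n# cv2.warpAffine expects (width, height), not (height, width)\ndsize = (int(np.ceil(warped[:, 0].max())), int(np.ceil(warped[:, 1].max())))\nprint(dsize)   # (785, 500)\n</pre><p>Corners that land at negative coordinates would additionally need a translation folded into the matrix; the sketch above only accounts for growth to the right and bottom.</p>\n<p>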
Your job is to apply the affine transformations you&apos;ve learned to correct its perspective distortion and then resize the map accordingly.</p>\n<h2 class=\"mume-header\" id=\"getting-affine-transformation\">Getting Affine Transformation</h2>\n\n<p>Given the importance of such a relation between two images, it should come as no surprise that <code>opencv</code> packs a number of convenience methods to help us specify this transformation. The two common use-cases are:</p>\n<ul>\n<li>\n<ol>\n<li>We <strong>specify</strong> our 2D vector representing the original image, <span class=\"katex\"><span class=\"katex-mathml\"><math xmlns=\"http://www.w3.org/1998/Math/MathML\"><semantics><mrow><mi>X</mi></mrow><annotation encoding=\"application/x-tex\">X</annotation></semantics></math></span><span class=\"katex-html\" aria-hidden=\"true\"><span class=\"base\"><span class=\"strut\" style=\"height:0.68333em;vertical-align:0em;\"></span><span class=\"mord mathdefault\" style=\"margin-right:0.07847em;\">X</span></span></span></span> and our 2x3 transformation matrix <span class=\"katex\"><span class=\"katex-mathml\"><math xmlns=\"http://www.w3.org/1998/Math/MathML\"><semantics><mrow><mi>M</mi></mrow><annotation encoding=\"application/x-tex\">M</annotation></semantics></math></span><span class=\"katex-html\" aria-hidden=\"true\"><span class=\"base\"><span class=\"strut\" style=\"height:0.68333em;vertical-align:0em;\"></span><span class=\"mord mathdefault\" style=\"margin-right:0.10903em;\">M</span></span></span></span> constructed in <code>numpy</code>.</li>\n</ol>\n<ul>\n<li>Example code:</li>\n</ul>\n<pre data-role=\"codeBlock\" data-info=\"py\" class=\"language-python\">img <span class=\"token operator\">=</span> cv2<span class=\"token punctuation\">.</span>imread<span class=\"token punctuation\">(</span><span class=\"token string\">&quot;our_image.png&quot;</span><span class=\"token punctuation\">)</span>\nmat <span class=\"token operator\">=</span> np<span 
class=\"token punctuation\">.</span>float32<span class=\"token punctuation\">(</span><span class=\"token punctuation\">[</span><span class=\"token punctuation\">[</span><span class=\"token number\">3</span><span class=\"token punctuation\">,</span> <span class=\"token number\">0</span><span class=\"token punctuation\">,</span> <span class=\"token number\">0</span><span class=\"token punctuation\">]</span><span class=\"token punctuation\">,</span> <span class=\"token punctuation\">[</span><span class=\"token number\">0</span><span class=\"token punctuation\">,</span> <span class=\"token number\">3</span><span class=\"token punctuation\">,</span> <span class=\"token number\">0</span><span class=\"token punctuation\">]</span><span class=\"token punctuation\">]</span><span class=\"token punctuation\">)</span>\nresult <span class=\"token operator\">=</span> cv2<span class=\"token punctuation\">.</span>warpAffine<span class=\"token punctuation\">(</span>img<span class=\"token punctuation\">,</span> M<span class=\"token operator\">=</span>mat<span class=\"token punctuation\">,</span> dsize<span class=\"token operator\">=</span><span class=\"token punctuation\">(</span><span class=\"token number\">600</span><span class=\"token punctuation\">,</span> <span class=\"token number\">600</span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">)</span>\ncv2<span class=\"token punctuation\">.</span>imshow<span class=\"token punctuation\">(</span><span class=\"token string\">&quot;Transformed&quot;</span><span class=\"token punctuation\">,</span> result<span class=\"token punctuation\">)</span>\n</pre></li>\n<li>\n<ol start=\"2\">\n<li>We <strong>obtain</strong> our 2x3 transformation matrix <span class=\"katex\"><span class=\"katex-mathml\"><math xmlns=\"http://www.w3.org/1998/Math/MathML\"><semantics><mrow><mi>M</mi></mrow><annotation encoding=\"application/x-tex\">M</annotation></semantics></math></span><span class=\"katex-html\" 
aria-hidden=\"true\"><span class=\"base\"><span class=\"strut\" style=\"height:0.68333em;vertical-align:0em;\"></span><span class=\"mord mathdefault\" style=\"margin-right:0.10903em;\">M</span></span></span></span> by deriving the geometric relation from three points. Three non-collinear points (a triangle) are the minimum needed to uniquely determine an affine transformation, which is then applied to the whole image.</li>\n</ol>\n<ul>\n<li>Example code:</li>\n</ul>\n<pre data-role=\"codeBlock\" data-info=\"py\" class=\"language-python\">img <span class=\"token operator\">=</span> cv2<span class=\"token punctuation\">.</span>imread<span class=\"token punctuation\">(</span><span class=\"token string\">&quot;our_image.png&quot;</span><span class=\"token punctuation\">)</span>\ncoords_s <span class=\"token operator\">=</span> np<span class=\"token punctuation\">.</span>float32<span class=\"token punctuation\">(</span><span class=\"token punctuation\">[</span><span class=\"token punctuation\">[</span><span class=\"token number\">10</span><span class=\"token punctuation\">,</span> <span class=\"token number\">10</span><span class=\"token punctuation\">]</span><span class=\"token punctuation\">,</span> <span class=\"token punctuation\">[</span><span class=\"token number\">80</span><span class=\"token punctuation\">,</span> <span class=\"token number\">10</span><span class=\"token punctuation\">]</span><span class=\"token punctuation\">,</span> <span class=\"token punctuation\">[</span><span class=\"token number\">10</span><span class=\"token punctuation\">,</span> <span class=\"token number\">80</span><span class=\"token punctuation\">]</span><span class=\"token punctuation\">]</span><span class=\"token punctuation\">)</span>\ncoords_d <span class=\"token operator\">=</span> np<span class=\"token punctuation\">.</span>float32<span class=\"token punctuation\">(</span><span class=\"token punctuation\">[</span><span class=\"token punctuation\">[</span><span class=\"token 
number\">10</span><span class=\"token punctuation\">,</span> <span class=\"token number\">10</span><span class=\"token punctuation\">]</span><span class=\"token punctuation\">,</span> <span class=\"token punctuation\">[</span><span class=\"token number\">95</span><span class=\"token punctuation\">,</span> <span class=\"token number\">10</span><span class=\"token punctuation\">]</span><span class=\"token punctuation\">,</span> <span class=\"token punctuation\">[</span><span class=\"token number\">10</span><span class=\"token punctuation\">,</span> <span class=\"token number\">80</span><span class=\"token punctuation\">]</span><span class=\"token punctuation\">]</span><span class=\"token punctuation\">)</span>\nmat <span class=\"token operator\">=</span> cv2<span class=\"token punctuation\">.</span>getAffineTransform<span class=\"token punctuation\">(</span>src<span class=\"token operator\">=</span>coords_s<span class=\"token punctuation\">,</span> dst<span class=\"token operator\">=</span>coords_d<span class=\"token punctuation\">)</span>\nresult <span class=\"token operator\">=</span> cv2<span class=\"token punctuation\">.</span>warpAffine<span class=\"token punctuation\">(</span>img<span class=\"token punctuation\">,</span> M<span class=\"token operator\">=</span>mat<span class=\"token punctuation\">,</span> dsize<span class=\"token operator\">=</span><span class=\"token punctuation\">(</span><span class=\"token number\">200</span><span class=\"token punctuation\">,</span> <span class=\"token number\">200</span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">)</span>\ncv2<span class=\"token punctuation\">.</span>imshow<span class=\"token punctuation\">(</span><span class=\"token string\">&quot;Transformed&quot;</span><span class=\"token punctuation\">,</span> result<span class=\"token punctuation\">)</span>\n</pre><p>Had we printed out <code>mat</code> from the snippet of code above, we would see a 2x3 matrix that looks like 
this:</p>\n<pre data-role=\"codeBlock\" data-info=\"py\" class=\"language-python\"><span class=\"token punctuation\">[</span><span class=\"token punctuation\">[</span> <span class=\"token number\">1.21428571</span>  <span class=\"token number\">0</span><span class=\"token punctuation\">.</span>         <span class=\"token operator\">-</span><span class=\"token number\">2.14285714</span><span class=\"token punctuation\">]</span>\n <span class=\"token punctuation\">[</span> <span class=\"token number\">0</span><span class=\"token punctuation\">.</span>          <span class=\"token number\">1</span><span class=\"token punctuation\">.</span>          <span class=\"token number\">0</span><span class=\"token punctuation\">.</span>        <span class=\"token punctuation\">]</span><span class=\"token punctuation\">]</span>\n</pre></li>\n<li>\n<p>2b <em>[Optional]</em>. As an extension to point (2) above, consider how we would use <code>cv2.warpAffine</code> to achieve a 90 degree clockwise rotation. 
If you have attended my Unsupervised Learning course from the Machine Learning Specialization, you will undoubtedly have seen this quick reference:<br>\n<img src=\"assets/rotationmatrix.gif\" alt></p>\n<p>To plug that directly into the <span class=\"katex\"><span class=\"katex-mathml\"><math xmlns=\"http://www.w3.org/1998/Math/MathML\"><semantics><mrow><mi>A</mi></mrow><annotation encoding=\"application/x-tex\">A</annotation></semantics></math></span><span class=\"katex-html\" aria-hidden=\"true\"><span class=\"base\"><span class=\"strut\" style=\"height:0.68333em;vertical-align:0em;\"></span><span class=\"mord mathdefault\">A</span></span></span></span> of our original formula:<br>\n<span class=\"katex-display\"><span class=\"katex\"><span class=\"katex-mathml\"><math xmlns=\"http://www.w3.org/1998/Math/MathML\"><semantics><mrow><mi>T</mi><mo>=</mo><mi>A</mi><mo>&#x22C5;</mo><mrow><mo fence=\"true\">[</mo><mtable rowspacing=\"0.15999999999999992em\" columnspacing=\"1em\"><mtr><mtd><mstyle scriptlevel=\"0\" displaystyle=\"false\"><mi>x</mi></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel=\"0\" displaystyle=\"false\"><mi>y</mi></mstyle></mtd></mtr></mtable><mo fence=\"true\">]</mo></mrow><mo>+</mo><mi>B</mi></mrow><annotation encoding=\"application/x-tex\">T = A \\cdot \\begin{bmatrix} x \\\\ y \\end{bmatrix} + B</annotation></semantics></math></span><span class=\"katex-html\" aria-hidden=\"true\"><span class=\"base\"><span class=\"strut\" style=\"height:0.68333em;vertical-align:0em;\"></span><span class=\"mord mathdefault\" style=\"margin-right:0.13889em;\">T</span><span class=\"mspace\" style=\"margin-right:0.2777777777777778em;\"></span><span class=\"mrel\">=</span><span class=\"mspace\" style=\"margin-right:0.2777777777777778em;\"></span></span><span class=\"base\"><span class=\"strut\" style=\"height:0.68333em;vertical-align:0em;\"></span><span class=\"mord mathdefault\">A</span><span class=\"mspace\" style=\"margin-right:0.2222222222222222em;\"></span><span 
class=\"mbin\">&#x22C5;</span><span class=\"mspace\" style=\"margin-right:0.2222222222222222em;\"></span></span><span class=\"base\"><span class=\"strut\" style=\"height:2.40003em;vertical-align:-0.95003em;\"></span><span class=\"minner\"><span class=\"mopen delimcenter\" style=\"top:0em;\"><span class=\"delimsizing size3\">[</span></span><span class=\"mord\"><span class=\"mtable\"><span class=\"col-align-c\"><span class=\"vlist-t vlist-t2\"><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:1.45em;\"><span style=\"top:-3.61em;\"><span class=\"pstrut\" style=\"height:3em;\"></span><span class=\"mord\"><span class=\"mord mathdefault\">x</span></span></span><span style=\"top:-2.4099999999999997em;\"><span class=\"pstrut\" style=\"height:3em;\"></span><span class=\"mord\"><span class=\"mord mathdefault\" style=\"margin-right:0.03588em;\">y</span></span></span></span><span class=\"vlist-s\">&#x200B;</span></span><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:0.9500000000000004em;\"><span></span></span></span></span></span></span></span><span class=\"mclose delimcenter\" style=\"top:0em;\"><span class=\"delimsizing size3\">]</span></span></span><span class=\"mspace\" style=\"margin-right:0.2222222222222222em;\"></span><span class=\"mbin\">+</span><span class=\"mspace\" style=\"margin-right:0.2222222222222222em;\"></span></span><span class=\"base\"><span class=\"strut\" style=\"height:0.68333em;vertical-align:0em;\"></span><span class=\"mord mathdefault\" style=\"margin-right:0.05017em;\">B</span></span></span></span></span></p>\n<p>A 90-degree clockwise rotation could be implemented as a 270-degree anti-clockwise rotation. 
Let&apos;s see this implementation in <code>opencv</code>:</p>\n<ul>\n<li>Example code:</li>\n</ul>\n<pre data-role=\"codeBlock\" data-info=\"py\" class=\"language-python\">img <span class=\"token operator\">=</span> cv2<span class=\"token punctuation\">.</span>imread<span class=\"token punctuation\">(</span><span class=\"token string\">&quot;assets/cvess.png&quot;</span><span class=\"token punctuation\">)</span>\n<span class=\"token punctuation\">(</span>h<span class=\"token punctuation\">,</span> w<span class=\"token punctuation\">)</span> <span class=\"token operator\">=</span> img<span class=\"token punctuation\">.</span>shape<span class=\"token punctuation\">[</span><span class=\"token punctuation\">:</span><span class=\"token number\">2</span><span class=\"token punctuation\">]</span>\ncenter <span class=\"token operator\">=</span> <span class=\"token punctuation\">(</span>w <span class=\"token operator\">//</span> <span class=\"token number\">2</span><span class=\"token punctuation\">,</span> h <span class=\"token operator\">//</span> <span class=\"token number\">2</span><span class=\"token punctuation\">)</span>\nmat3 <span class=\"token operator\">=</span> cv2<span class=\"token punctuation\">.</span>getRotationMatrix2D<span class=\"token punctuation\">(</span>center<span class=\"token punctuation\">,</span> angle<span class=\"token operator\">=</span><span class=\"token number\">270</span><span class=\"token punctuation\">,</span> scale<span class=\"token operator\">=</span><span class=\"token number\">1</span><span class=\"token punctuation\">)</span>\n<span class=\"token keyword\">print</span><span class=\"token punctuation\">(</span><span class=\"token string-interpolation\"><span class=\"token string\">f&apos;270 degree anti-clockwise: \\n </span><span class=\"token interpolation\"><span class=\"token punctuation\">{</span>np<span class=\"token punctuation\">.</span><span class=\"token builtin\">round</span><span class=\"token 
punctuation\">(</span>mat3<span class=\"token punctuation\">,</span> <span class=\"token number\">2</span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">}</span></span><span class=\"token string\">&apos;</span></span><span class=\"token punctuation\">)</span>\nrotated <span class=\"token operator\">=</span> cv2<span class=\"token punctuation\">.</span>warpAffine<span class=\"token punctuation\">(</span>img<span class=\"token punctuation\">,</span> mat3<span class=\"token punctuation\">,</span> <span class=\"token punctuation\">(</span>w<span class=\"token punctuation\">,</span> h<span class=\"token punctuation\">)</span><span class=\"token punctuation\">)</span>\ncv2<span class=\"token punctuation\">.</span>imshow<span class=\"token punctuation\">(</span><span class=\"token string\">&quot;Rotated&quot;</span><span class=\"token punctuation\">,</span> rotated<span class=\"token punctuation\">)</span>\n<span class=\"token comment\"># </span>\n<span class=\"token comment\"># print output:</span>\n<span class=\"token comment\"># </span>\n<span class=\"token comment\"># 270 degree anti-clockwise: </span>\n<span class=\"token comment\"># [[ -0.  -1. 400.]</span>\n<span class=\"token comment\"># [  1.  -0.   
0.]] </span>\n</pre><p>We learned earlier that:<br>\n<span class=\"katex-display\"><span class=\"katex\"><span class=\"katex-mathml\"><math xmlns=\"http://www.w3.org/1998/Math/MathML\"><semantics><mrow><mi>M</mi><mo>=</mo><mrow><mo fence=\"true\">[</mo><mtable rowspacing=\"0.15999999999999992em\" columnspacing=\"1em\"><mtr><mtd><mstyle scriptlevel=\"0\" displaystyle=\"false\"><mi>A</mi></mstyle></mtd><mtd><mstyle scriptlevel=\"0\" displaystyle=\"false\"><mi>B</mi></mstyle></mtd></mtr></mtable><mo fence=\"true\">]</mo></mrow><mo>=</mo><mrow><mo fence=\"true\">[</mo><mtable rowspacing=\"0.15999999999999992em\" columnspacing=\"1em\"><mtr><mtd><mstyle scriptlevel=\"0\" displaystyle=\"false\"><msub><mi>a</mi><mn>00</mn></msub></mstyle></mtd><mtd><mstyle scriptlevel=\"0\" displaystyle=\"false\"><msub><mi>a</mi><mn>01</mn></msub></mstyle></mtd><mtd><mstyle scriptlevel=\"0\" displaystyle=\"false\"><msub><mi>b</mi><mn>00</mn></msub></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel=\"0\" displaystyle=\"false\"><msub><mi>a</mi><mn>10</mn></msub></mstyle></mtd><mtd><mstyle scriptlevel=\"0\" displaystyle=\"false\"><msub><mi>a</mi><mn>11</mn></msub></mstyle></mtd><mtd><mstyle scriptlevel=\"0\" displaystyle=\"false\"><msub><mi>b</mi><mn>10</mn></msub></mstyle></mtd></mtr></mtable><mo fence=\"true\">]</mo></mrow></mrow><annotation encoding=\"application/x-tex\">M = \\begin{bmatrix} A &amp; B \\end{bmatrix} = \\begin{bmatrix} a_{00} &amp; a_{01} &amp; b_{00} \\\\  a_{10} &amp; a_{11} &amp; b_{10} \\end{bmatrix}</annotation></semantics></math></span><span class=\"katex-html\" aria-hidden=\"true\"><span class=\"base\"><span class=\"strut\" style=\"height:0.68333em;vertical-align:0em;\"></span><span class=\"mord mathdefault\" style=\"margin-right:0.10903em;\">M</span><span class=\"mspace\" style=\"margin-right:0.2777777777777778em;\"></span><span class=\"mrel\">=</span><span class=\"mspace\" style=\"margin-right:0.2777777777777778em;\"></span></span><span class=\"base\"><span 
class=\"strut\" style=\"height:1.20001em;vertical-align:-0.35001em;\"></span><span class=\"minner\"><span class=\"mopen delimcenter\" style=\"top:0em;\"><span class=\"delimsizing size1\">[</span></span><span class=\"mord\"><span class=\"mtable\"><span class=\"col-align-c\"><span class=\"vlist-t vlist-t2\"><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:0.8500000000000001em;\"><span style=\"top:-3.01em;\"><span class=\"pstrut\" style=\"height:3em;\"></span><span class=\"mord\"><span class=\"mord mathdefault\">A</span></span></span></span><span class=\"vlist-s\">&#x200B;</span></span><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:0.35000000000000003em;\"><span></span></span></span></span></span><span class=\"arraycolsep\" style=\"width:0.5em;\"></span><span class=\"arraycolsep\" style=\"width:0.5em;\"></span><span class=\"col-align-c\"><span class=\"vlist-t vlist-t2\"><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:0.8500000000000001em;\"><span style=\"top:-3.01em;\"><span class=\"pstrut\" style=\"height:3em;\"></span><span class=\"mord\"><span class=\"mord mathdefault\" style=\"margin-right:0.05017em;\">B</span></span></span></span><span class=\"vlist-s\">&#x200B;</span></span><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:0.35000000000000003em;\"><span></span></span></span></span></span></span></span><span class=\"mclose delimcenter\" style=\"top:0em;\"><span class=\"delimsizing size1\">]</span></span></span><span class=\"mspace\" style=\"margin-right:0.2777777777777778em;\"></span><span class=\"mrel\">=</span><span class=\"mspace\" style=\"margin-right:0.2777777777777778em;\"></span></span><span class=\"base\"><span class=\"strut\" style=\"height:2.40003em;vertical-align:-0.95003em;\"></span><span class=\"minner\"><span class=\"mopen delimcenter\" style=\"top:0em;\"><span class=\"delimsizing size3\">[</span></span><span class=\"mord\"><span class=\"mtable\"><span class=\"col-align-c\"><span class=\"vlist-t 
vlist-t2\"><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:1.45em;\"><span style=\"top:-3.61em;\"><span class=\"pstrut\" style=\"height:3em;\"></span><span class=\"mord\"><span class=\"mord\"><span class=\"mord mathdefault\">a</span><span class=\"msupsub\"><span class=\"vlist-t vlist-t2\"><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:0.30110799999999993em;\"><span style=\"top:-2.5500000000000003em;margin-left:0em;margin-right:0.05em;\"><span class=\"pstrut\" style=\"height:2.7em;\"></span><span class=\"sizing reset-size6 size3 mtight\"><span class=\"mord mtight\"><span class=\"mord mtight\">0</span><span class=\"mord mtight\">0</span></span></span></span></span><span class=\"vlist-s\">&#x200B;</span></span><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:0.15em;\"><span></span></span></span></span></span></span></span></span><span style=\"top:-2.4099999999999997em;\"><span class=\"pstrut\" style=\"height:3em;\"></span><span class=\"mord\"><span class=\"mord\"><span class=\"mord mathdefault\">a</span><span class=\"msupsub\"><span class=\"vlist-t vlist-t2\"><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:0.30110799999999993em;\"><span style=\"top:-2.5500000000000003em;margin-left:0em;margin-right:0.05em;\"><span class=\"pstrut\" style=\"height:2.7em;\"></span><span class=\"sizing reset-size6 size3 mtight\"><span class=\"mord mtight\"><span class=\"mord mtight\">1</span><span class=\"mord mtight\">0</span></span></span></span></span><span class=\"vlist-s\">&#x200B;</span></span><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:0.15em;\"><span></span></span></span></span></span></span></span></span></span><span class=\"vlist-s\">&#x200B;</span></span><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:0.9500000000000004em;\"><span></span></span></span></span></span><span class=\"arraycolsep\" style=\"width:0.5em;\"></span><span class=\"arraycolsep\" style=\"width:0.5em;\"></span><span 
class=\"col-align-c\"><span class=\"vlist-t vlist-t2\"><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:1.45em;\"><span style=\"top:-3.61em;\"><span class=\"pstrut\" style=\"height:3em;\"></span><span class=\"mord\"><span class=\"mord\"><span class=\"mord mathdefault\">a</span><span class=\"msupsub\"><span class=\"vlist-t vlist-t2\"><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:0.30110799999999993em;\"><span style=\"top:-2.5500000000000003em;margin-left:0em;margin-right:0.05em;\"><span class=\"pstrut\" style=\"height:2.7em;\"></span><span class=\"sizing reset-size6 size3 mtight\"><span class=\"mord mtight\"><span class=\"mord mtight\">0</span><span class=\"mord mtight\">1</span></span></span></span></span><span class=\"vlist-s\">&#x200B;</span></span><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:0.15em;\"><span></span></span></span></span></span></span></span></span><span style=\"top:-2.4099999999999997em;\"><span class=\"pstrut\" style=\"height:3em;\"></span><span class=\"mord\"><span class=\"mord\"><span class=\"mord mathdefault\">a</span><span class=\"msupsub\"><span class=\"vlist-t vlist-t2\"><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:0.30110799999999993em;\"><span style=\"top:-2.5500000000000003em;margin-left:0em;margin-right:0.05em;\"><span class=\"pstrut\" style=\"height:2.7em;\"></span><span class=\"sizing reset-size6 size3 mtight\"><span class=\"mord mtight\"><span class=\"mord mtight\">1</span><span class=\"mord mtight\">1</span></span></span></span></span><span class=\"vlist-s\">&#x200B;</span></span><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:0.15em;\"><span></span></span></span></span></span></span></span></span></span><span class=\"vlist-s\">&#x200B;</span></span><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:0.9500000000000004em;\"><span></span></span></span></span></span><span class=\"arraycolsep\" style=\"width:0.5em;\"></span><span class=\"arraycolsep\" 
style=\"width:0.5em;\"></span><span class=\"col-align-c\"><span class=\"vlist-t vlist-t2\"><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:1.45em;\"><span style=\"top:-3.61em;\"><span class=\"pstrut\" style=\"height:3em;\"></span><span class=\"mord\"><span class=\"mord\"><span class=\"mord mathdefault\">b</span><span class=\"msupsub\"><span class=\"vlist-t vlist-t2\"><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:0.30110799999999993em;\"><span style=\"top:-2.5500000000000003em;margin-left:0em;margin-right:0.05em;\"><span class=\"pstrut\" style=\"height:2.7em;\"></span><span class=\"sizing reset-size6 size3 mtight\"><span class=\"mord mtight\"><span class=\"mord mtight\">0</span><span class=\"mord mtight\">0</span></span></span></span></span><span class=\"vlist-s\">&#x200B;</span></span><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:0.15em;\"><span></span></span></span></span></span></span></span></span><span style=\"top:-2.4099999999999997em;\"><span class=\"pstrut\" style=\"height:3em;\"></span><span class=\"mord\"><span class=\"mord\"><span class=\"mord mathdefault\">b</span><span class=\"msupsub\"><span class=\"vlist-t vlist-t2\"><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:0.30110799999999993em;\"><span style=\"top:-2.5500000000000003em;margin-left:0em;margin-right:0.05em;\"><span class=\"pstrut\" style=\"height:2.7em;\"></span><span class=\"sizing reset-size6 size3 mtight\"><span class=\"mord mtight\"><span class=\"mord mtight\">1</span><span class=\"mord mtight\">0</span></span></span></span></span><span class=\"vlist-s\">&#x200B;</span></span><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:0.15em;\"><span></span></span></span></span></span></span></span></span></span><span class=\"vlist-s\">&#x200B;</span></span><span class=\"vlist-r\"><span class=\"vlist\" style=\"height:0.9500000000000004em;\"><span></span></span></span></span></span></span></span><span class=\"mclose delimcenter\" 
style=\"top:0em;\"><span class=\"delimsizing size3\">]</span></span></span></span></span></span></span></p>\n<p>So <span class=\"katex\"><span class=\"katex-mathml\"><math xmlns=\"http://www.w3.org/1998/Math/MathML\"><semantics><mrow><mi>A</mi></mrow><annotation encoding=\"application/x-tex\">A</annotation></semantics></math></span><span class=\"katex-html\" aria-hidden=\"true\"><span class=\"base\"><span class=\"strut\" style=\"height:0.68333em;vertical-align:0em;\"></span><span class=\"mord mathdefault\">A</span></span></span></span> would be <code>[[0, -1], [1, 0]]</code> and <span class=\"katex\"><span class=\"katex-mathml\"><math xmlns=\"http://www.w3.org/1998/Math/MathML\"><semantics><mrow><mi>B</mi></mrow><annotation encoding=\"application/x-tex\">B</annotation></semantics></math></span><span class=\"katex-html\" aria-hidden=\"true\"><span class=\"base\"><span class=\"strut\" style=\"height:0.68333em;vertical-align:0em;\"></span><span class=\"mord mathdefault\" style=\"margin-right:0.05017em;\">B</span></span></span></span> would be <code>[400, 0]</code>. 
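</p>\n<p>We can reconstruct that printed matrix by hand. <code>cv2.getRotationMatrix2D</code> builds its 2x3 matrix from the rotation-about-a-center formula; the sketch below recomputes it in plain <code>numpy</code>, assuming the same setup as the snippet above (a 400x400 image rotated about its center <code>(200, 200)</code>):</p>\n<pre data-role=\"codeBlock\" data-info=\"py\" class=\"language-python\">import numpy as np\n\n# alpha = scale * cos(theta), beta = scale * sin(theta), theta anti-clockwise,\n# matching the convention used by cv2.getRotationMatrix2D\ntheta = np.deg2rad(270)      # 270 degrees anti-clockwise = 90 degrees clockwise\ncx, cy, scale = 200, 200, 1  # assumed center of a 400x400 image\nalpha = scale * np.cos(theta)\nbeta = scale * np.sin(theta)\n\nM = np.array([\n    [alpha, beta, (1 - alpha) * cx - beta * cy],\n    [-beta, alpha, beta * cx + (1 - alpha) * cy],\n])\nprint(np.round(M, 2))\n# [[ -0.  -1. 400.]\n#  [  1.  -0.   0.]]\n</pre><p>Up to floating-point sign noise (<code>-0.</code>), this matches the matrix printed by the snippet above.</p>\n<p>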
Fundamentally, the <code>cv2.getRotationMatrix2D</code> is still applying an affine transformation to map the pixels from one point to another using a 2x3 matrix.</p>\n<ul>\n<li>Skeptical and want further mathematical proof?\n<ul>\n<li>Hop to the <strong>Trigonometry Proof</strong> section.</li>\n</ul>\n</li>\n<li>Want to experiment?\n<ul>\n<li>Modify the script in <code>rotate_01.py</code> to obtain <span class=\"katex\"><span class=\"katex-mathml\"><math xmlns=\"http://www.w3.org/1998/Math/MathML\"><semantics><mrow><mi>M</mi></mrow><annotation encoding=\"application/x-tex\">M</annotation></semantics></math></span><span class=\"katex-html\" aria-hidden=\"true\"><span class=\"base\"><span class=\"strut\" style=\"height:0.68333em;vertical-align:0em;\"></span><span class=\"mord mathdefault\" style=\"margin-right:0.10903em;\">M</span></span></span></span> for a 180-degree rotation, and a 30-degree counter-clockwise rotation</li>\n</ul>\n</li>\n</ul>\n</li>\n</ul>\n<h3 class=\"mume-header\" id=\"dive-deeper\">Dive Deeper</h3>\n\n<p>Let&apos;s also look at another application of <code>getAffineTransform</code> to strengthen our understanding.</p>\n<p>Supposed we specify <span class=\"katex\"><span class=\"katex-mathml\"><math xmlns=\"http://www.w3.org/1998/Math/MathML\"><semantics><mrow><mi>M</mi></mrow><annotation encoding=\"application/x-tex\">M</annotation></semantics></math></span><span class=\"katex-html\" aria-hidden=\"true\"><span class=\"base\"><span class=\"strut\" style=\"height:0.68333em;vertical-align:0em;\"></span><span class=\"mord mathdefault\" style=\"margin-right:0.10903em;\">M</span></span></span></span> to be <code>mat = np.float32([[1, 0, 0], [0, 1, 0]])</code>, what do you expect the transformation to be?</p>\n<p>Take a minute to discuss with your classmates or refer back to the Mathematical Definitions section above and try to internalize this before moving forward.</p>\n<p>To verify your answer, run <code>scale_03.py</code> and see if your hunch 
was right.</p>\n<p>For an extra challenge, let&apos;s assume <code>our_image.png</code> is an image of 200x200. Pay attention to the specification of <code>mat</code> (<span class=\"katex\"><span class=\"katex-mathml\"><math xmlns=\"http://www.w3.org/1998/Math/MathML\"><semantics><mrow><mi>M</mi></mrow><annotation encoding=\"application/x-tex\">M</annotation></semantics></math></span><span class=\"katex-html\" aria-hidden=\"true\"><span class=\"base\"><span class=\"strut\" style=\"height:0.68333em;vertical-align:0em;\"></span><span class=\"mord mathdefault\" style=\"margin-right:0.10903em;\">M</span></span></span></span>), what do you expect the outcome <code>result</code> to be?</p>\n<p>Take a minute to discuss before moving forward.</p>\n<pre data-role=\"codeBlock\" data-info=\"py\" class=\"language-python\">img <span class=\"token operator\">=</span> cv2<span class=\"token punctuation\">.</span>imread<span class=\"token punctuation\">(</span><span class=\"token string\">&quot;assets/our_image.png&quot;</span><span class=\"token punctuation\">)</span>\ncv2<span class=\"token punctuation\">.</span>imshow<span class=\"token punctuation\">(</span><span class=\"token string\">&quot;Original&quot;</span><span class=\"token punctuation\">,</span> img<span class=\"token punctuation\">)</span>\n\n<span class=\"token comment\"># custom transformation matrix</span>\nmat <span class=\"token operator\">=</span> np<span class=\"token punctuation\">.</span>float32<span class=\"token punctuation\">(</span><span class=\"token punctuation\">[</span><span class=\"token punctuation\">[</span><span class=\"token number\">3</span><span class=\"token punctuation\">,</span> <span class=\"token number\">0</span><span class=\"token punctuation\">,</span> <span class=\"token number\">0</span><span class=\"token punctuation\">]</span><span class=\"token punctuation\">,</span> <span class=\"token punctuation\">[</span><span class=\"token number\">0</span><span class=\"token 
punctuation\">,</span> <span class=\"token number\">3</span><span class=\"token punctuation\">,</span> <span class=\"token number\">0</span><span class=\"token punctuation\">]</span><span class=\"token punctuation\">]</span><span class=\"token punctuation\">)</span>\n<span class=\"token keyword\">print</span><span class=\"token punctuation\">(</span>mat<span class=\"token punctuation\">)</span>\nresult <span class=\"token operator\">=</span> cv2<span class=\"token punctuation\">.</span>warpAffine<span class=\"token punctuation\">(</span>img<span class=\"token punctuation\">,</span> M<span class=\"token operator\">=</span>mat<span class=\"token punctuation\">,</span> dsize<span class=\"token operator\">=</span><span class=\"token punctuation\">(</span><span class=\"token number\">200</span><span class=\"token punctuation\">,</span> <span class=\"token number\">200</span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">)</span>\n</pre><p>You may have expected the 2x3 matrix <code>mat</code> to have a scaling effect on our original image. 
However, the required argument of <code>dsize</code> in our <code>warpAffine()</code> call constrained the output to its original dimension, 200x200, thus &quot;cropping out&quot; only the top left corner of the image.</p>\n<p>Supposed we&apos;ll like to see the transformed image (scaled by 3x) in its entirety, how would we have changed the value passed to the <code>dsize</code> argument?</p>\n<p>Refer to <code>scale_04.py</code> to verify that you&apos;ve got this right.</p>\n<h4 class=\"mume-header\" id=\"trigonometry-proof\">Trigonometry Proof</h4>\n\n<p><em>This section is optional; you may choose to skip this section.</em></p>\n<ul>\n<li class=\"task-list-item\">\n<p><input type=\"checkbox\" class=\"task-list-item-checkbox\"> <a href=\"https://www.youtube.com/watch?v=tIixrNtLJ8U\">Watch Rotation Matrix Explained Visually </a></p>\n  <iframe width=\"560\" height=\"315\" src=\"https://www.youtube.com/embed/pWfXR_HmyUw\" frameborder=\"0\" allow=\"accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture\" allowfullscreen></iframe>\n<ul>\n<li><a href=\"https://www.youtube.com/watch?v=pWfXR_HmyUw\">Bahasa Indonesia voiceover</a> is also available</li>\n</ul>\n</li>\n</ul>\n<p>If you&apos;re done watching the video, see the same example being presented in code:</p>\n<pre data-role=\"codeBlock\" data-info=\"py\" class=\"language-python\">a <span class=\"token operator\">=</span> np<span class=\"token punctuation\">.</span>float32<span class=\"token punctuation\">(</span><span class=\"token punctuation\">[</span><span class=\"token punctuation\">[</span><span class=\"token number\">0</span><span class=\"token punctuation\">,</span> <span class=\"token operator\">-</span><span class=\"token number\">1</span><span class=\"token punctuation\">]</span><span class=\"token punctuation\">,</span> <span class=\"token punctuation\">[</span><span class=\"token number\">1</span><span class=\"token punctuation\">,</span> <span class=\"token number\">0</span><span 
class=\"token punctuation\">]</span><span class=\"token punctuation\">]</span><span class=\"token punctuation\">)</span>\nx <span class=\"token operator\">=</span> np<span class=\"token punctuation\">.</span>float32<span class=\"token punctuation\">(</span><span class=\"token punctuation\">[</span><span class=\"token number\">3</span><span class=\"token punctuation\">,</span> <span class=\"token number\">6</span><span class=\"token punctuation\">]</span><span class=\"token punctuation\">)</span>\nnp<span class=\"token punctuation\">.</span>matmul<span class=\"token punctuation\">(</span>a<span class=\"token punctuation\">,</span> x<span class=\"token punctuation\">)</span>\n<span class=\"token comment\"># output:</span>\n<span class=\"token comment\"># array([-6.,  3.], dtype=float32)</span>\n</pre><h2 class=\"mume-header\" id=\"code-illustrations\">Code Illustrations</h2>\n\n<ul>\n<li>Code example of using <code>getRotationMatrix2D()</code> to get a 2x3 matrix: <strong><code>rotate_01.py</code></strong></li>\n<li>Code example of using three points to <code>getAffineTransform()</code>, obtaining a 2x3 matrix of <code>[[1,0,0], [0,1,0]]</code> (no transformation): <code>scale_01.py</code></li>\n<li>Code example of explicit specification for our 2x3 matrix using <code>np.float32([[1,0,0], [0,1,0]])</code>: <strong><code>scale_02.py</code></strong></li>\n<li>Code example of setting the <code>dsize</code> parameter in <code>cv2.warpAffine</code> without transformation: <strong><code>scale_03.py</code></strong></li>\n<li>Code example of a scale transformation and setting the <code>dsize</code> parameter accordingly: <strong><code>scale_04.py</code></strong></li>\n<li>Code example of using three points to <code>getAffineTransform()</code>, obtaining a 2x3 matrix of <code>[[1,0,0], [0,1,0]]</code>: <strong><code>scale_05.py</code></strong></li>\n<li>Code example of translating (shifting an image) using a 2x3 matrix: 
<strong><code>translate_01.py</code></strong></li>\n</ul>\n<h2 class=\"mume-header\" id=\"summary-and-key-points\">Summary and Key Points</h2>\n\n<ol>\n<li>\n<p>Images from imaging systems and capturing systems are often &quot;subject to geometric distortion introduced by perspective irregularities&quot;<sup class=\"footnote-ref\"><a href=\"#fn1\" id=\"fnref1\">[1]</a></sup> or &quot;deformations that occur with non-ideal camera angles&quot;<sup class=\"footnote-ref\"><a href=\"#fn2\" id=\"fnref2\">[2]</a></sup>.</p>\n</li>\n<li>\n<p>In the case of translation or scaling, we typically specify our 2x3 matrix using <code>np.float32()</code> and feed this matrix to <code>cv2.warpAffine()</code></p>\n</li>\n<li>\n<p>In the case of rotation, we typically use the convenience function <code>cv2.getRotationMatrix2D()</code> to obtain the 2x3 matrix before feeding it to <code>cv2.warpAffine()</code></p>\n</li>\n</ol>\n<blockquote>\n<p><code>cv2.getAffineTransform(src, dst)</code></p>\n<p><strong>Parameters:</strong></p>\n<ul>\n<li><strong>src</strong> - Coordinates of triangle vertices in the source image</li>\n<li><strong>dst</strong> - Coordinates of corresponding triangle vertices in the destination image</li>\n</ul>\n</blockquote>\n<h2 class=\"mume-header\" id=\"learn-by-building\">Learn-by-Building</h2>\n\n<p>In the <code>homework</code> directory, you&apos;ll find a digital map <code>belitung_raw.jpg</code>. 
Your job is to apply what you&apos;ve learned in this lesson to restore the map by correcting its skew and resize it appropriately.</p>\n<p><img src=\"assets/hw1_belitung.png\" alt></p>\n<h2 class=\"mume-header\" id=\"references\">References</h2>\n\n<hr class=\"footnotes-sep\">\n<section class=\"footnotes\">\n<ol class=\"footnotes-list\">\n<li id=\"fn1\" class=\"footnote-item\"><p>Fisher, R., Perkins, S., Walker, A., Wolfart, E., Hypermedia Image Processing Learning (HIPR2) Resources, 2003 <a href=\"#fnref1\" class=\"footnote-backref\">&#x21A9;&#xFE0E;</a></p>\n</li>\n<li id=\"fn2\" class=\"footnote-item\"><p><a href=\"https://www.mathworks.com/discovery/affine-transformation.html\">MathWorks</a>, Linear mapping method using affine transformation, Affine Transformation <a href=\"#fnref2\" class=\"footnote-backref\">&#x21A9;&#xFE0E;</a></p>\n</li>\n</ol>\n</section>\n</div>\n      </div>\n      <div class=\"md-sidebar-toc\"><ul>\n<li><a href=\"#affine-transformation\">Affine Transformation</a>\n<ul>\n<li><a href=\"#definition\">Definition</a>\n<ul>\n<li><a href=\"#mathematical-definitions\">Mathematical Definitions</a>\n<ul>\n<li><a href=\"#practical-examples\">Practical Examples</a></li>\n</ul>\n</li>\n</ul>\n</li>\n<li><a href=\"#motivation\">Motivation</a></li>\n<li><a href=\"#getting-affine-transformation\">Getting Affine Transformation</a>\n<ul>\n<li><a href=\"#dive-deeper\">Dive Deeper</a>\n<ul>\n<li><a href=\"#trigonometry-proof\">Trigonometry Proof</a></li>\n</ul>\n</li>\n</ul>\n</li>\n<li><a href=\"#code-illustrations\">Code Illustrations</a></li>\n<li><a href=\"#summary-and-key-points\">Summary and Key Points</a></li>\n<li><a href=\"#learn-by-building\">Learn-by-Building</a></li>\n<li><a href=\"#references\">References</a></li>\n</ul>\n</li>\n</ul>\n</div>\n      <a id=\"sidebar-toc-btn\">&#x2261;</a>\n    \n    \n    \n    \n    \n    \n    \n    \n<script>\n\nvar sidebarTOCBtn = 
document.getElementById('sidebar-toc-btn')\nsidebarTOCBtn.addEventListener('click', function(event) {\n  event.stopPropagation()\n  if (document.body.hasAttribute('html-show-sidebar-toc')) {\n    document.body.removeAttribute('html-show-sidebar-toc')\n  } else {\n    document.body.setAttribute('html-show-sidebar-toc', true)\n  }\n})\n</script>\n      \n  \n    </body></html>"
  },
  {
    "path": "transformation/lecture_affine.md",
    "content": "# Affine Transformation\n\n## Definition\nAny transformation that can be expressed in the form of a _matrix multiplication_ (linear transformation) followed by a _vector addition_ (translation). \n\n$$T = A \\cdot \\begin{bmatrix} x \\\\ y \\end{bmatrix} + B$$\n\nIn which:\n\n$$A = \\begin{bmatrix} a_{00} & a_{01} \\\\ a_{10} & a_{11} \\end{bmatrix};   B = \\begin{bmatrix} b_{00} \\\\ b_{10} \\end{bmatrix}$$\n\nWhen concatenated horizontally, this can be expressed in a larger Matrix:\n\n$$M = \\begin{bmatrix} A & B \\end{bmatrix} = \\begin{bmatrix} a_{00} & a_{01} & b_{00} \\\\  a_{10} & a_{11} & b_{10} \\end{bmatrix}$$\n\nBy the definition above (_matmul_ + _vector addition_), affine transformation can be used to achieve:\n- Scaling (linear transformation)\n- Rotations (linear transformation)\n- Translations (vector additions)\n\nAffine transformation preserves points, straight lines, and planes. Parallel lines will remain parallel. It does not however preserve the distance and angles between points.\n\nWe represent an Affine Transformation using a **2x3 matrix**.\n\n### Mathematical Definitions\nConsider the goal of transforming a 2D vector $X = \\begin{bmatrix} x \\\\ y \\end{bmatrix}$ using $A$ and $B$ to obtain $T$, we can do it like such:\n\n$$T = A \\cdot \\begin{bmatrix} x \\\\ y \\end{bmatrix} + B$$ \n\nOr equivalently:\n\n$$T = M \\cdot [x,y,1]^T = \\begin{bmatrix} \na_{00}x + a_{01}y + b_{00} \\\\ a_{10}x + a_{11}y + b_{10}  \\end{bmatrix}$$\n\n#### Practical Examples\nIn `scale_04.py` from the **Examples and Illustrations** section, you'll see that the  2x3 matrix $M$ is simply defined as such:\n`np.float32([[3, 0, 0], [0, 3, 0]])`\n\nThe code above is equivalent to the one below:\n```py\nx = np.array([[3, 0, 0], [0, 3, 0]], dtype='float32')\n# alternative:\nx = np.array([[3, 0, 0], [0, 3, 0]])\nx = x.astype('float32')\nx.dtype # dtype('float32')\n```\n\nWhen you explicitly specify a 2x3 matrix, think of the first two columns as the $A$ 
component, or the matrix-multiplication process. The third column, naturally, represents the $B$ component, or the vector-addition process. This may sound a little abstract, so I encourage you to pause and take a look at the code below:\n```py\n(h, w) = img.shape[:2]\nmat = np.float32([[1, 0, -140], [0, 1, 20]])\ntranslated = cv2.warpAffine(img, mat, (w, h))\ncv2.imshow(\"Translated\", translated)\n```\n\nNotice that our $A$ is an _identity matrix_ of size 2. An identity matrix is the matrix equivalent of the scalar 1: multiplying a vector by it changes nothing. \n\n$$T  = \\begin{bmatrix} 1 & 0 \\\\ 0 & 1 \\end{bmatrix}  \\cdot \\begin{bmatrix} x \\\\ y \\end{bmatrix} + \\begin{bmatrix} -140 \\\\ 20 \\end{bmatrix}$$\n\nWhich leads to:\n$$T  = \\begin{bmatrix} 1 \\cdot x + 0 \\cdot y -140 \\\\ 0 \\cdot x + 1 \\cdot y + 20 \\end{bmatrix}$$\n\nAnd our $B$, the vector-addition component, moves each pixel -- or more formally, translates each pixel -- on the image by -140 in the $x$ direction and 20 in the $y$ direction. Find the full code example in `translate_01.py`.\n\n\n## Motivation\n1. Imaging systems in the real world are often subject to **geometric distortion**. The distortion may be introduced by perspective irregularities, physical constraints (e.g. camera placement), or other reasons. \n\n2. In the field of GIS (geographic information systems), one would routinely use affine transformations to \"convert\" geographic coordinates into screen coordinates so that they can **be displayed and presented** on our handheld / navigational devices. \n\n3. One may also overlay coordinate data on spatial data that references a different coordinate system, or **\"stitch\" together** different sources of data using a series of transformations.\n\nThese are but a handful of examples where one may expect to see routine use of affine transformations. 
If you're spending any amount of time in computer vision, a high degree of familiarity with these remapping routines in OpenCV will come in very handy.\n\nIn your learn-by-building section, you will find a less-than-perfectly digitized map, `belitung_raw.jpg`. Your job is to apply the necessary affine transformations to correct its perspective distortion and resize the map accordingly.\n\n## Getting Affine Transformation \nGiven the importance of such a relation between two images, it should come as no surprise that `opencv` packs a number of convenience methods to help us specify this transformation. The two common use cases are:\n- 1. We **specify** our 2D vector representing the original image, $X$, and our 2x3 transformation matrix $M$ constructed in `numpy`.\n    - Example code: \n    ```py\n    img = cv2.imread(\"our_image.png\")\n    mat = np.float32([[3, 0, 0], [0, 3, 0]])\n    result = cv2.warpAffine(img, M=mat, dsize=(600, 600))\n    cv2.imshow(\"Transformed\", result)\n    ```\n\n- 2.  We **obtain** our 2x3 transformation matrix $M$ by deriving the geometric relation using three points. Three points form a triangle, which is the minimal case required to find the affine transformation before applying the transformation to the whole image.\n    - Example code: \n    ```py\n    img = cv2.imread(\"our_image.png\")\n    coords_s = np.float32([[10, 10], [80, 10], [10, 80]])\n    coords_d = np.float32([[10, 10], [95, 10], [10, 80]])\n    mat = cv2.getAffineTransform(src=coords_s, dst=coords_d)\n    result = cv2.warpAffine(img, M=mat, dsize=(200, 200))\n    cv2.imshow(\"Transformed\", result)\n    ```\n    Had we printed out `mat` from the snippet of code above, we would see a 2x3 matrix that looks like this:\n    ```py\n    [[ 1.21428571  0.         -2.14285714]\n     [ 0.          1.          0.        ]]\n    ```\n\n- 2b _[Optional]_. 
As an extension to point (2) above, consider how we would use `cv2.warpAffine` to achieve a 90-degree clockwise rotation. If you have attended my Unsupervised Learning course from the Machine Learning Specialization, you will undoubtedly have seen this quick reference:\n    ![](assets/rotationmatrix.gif) \n\n    To plug that directly into the $A$ of our original formula:\n    $$T = A \\cdot \\begin{bmatrix} x \\\\ y \\end{bmatrix} + B$$\n\n    A 90-degree clockwise rotation could be implemented as a 270-degree anti-clockwise rotation. Let's see this implementation in `opencv`:\n\n    - Example code: \n    ```py\n    img = cv2.imread(\"assets/cvess.png\")\n    (h, w) = img.shape[:2]\n    center = (w // 2, h // 2)\n    mat3 = cv2.getRotationMatrix2D(center, angle=270, scale=1)\n    print(f'270 degree anti-clockwise: \\n {np.round(mat3, 2)}')\n    rotated = cv2.warpAffine(img, mat3, (w, h))\n    cv2.imshow(\"Rotated\", rotated)\n    # \n    # print output:\n    # \n    # 270 degree anti-clockwise: \n    # [[ -0.  -1. 400.]\n    # [  1.  -0.   0.]] \n    ```\n\n    We learned earlier that:\n    $$M = \\begin{bmatrix} A & B \\end{bmatrix} = \\begin{bmatrix} a_{00} & a_{01} & b_{00} \\\\  a_{10} & a_{11} & b_{10} \\end{bmatrix}$$\n\n    So $A$ would be `[[0, -1], [1, 0]]` and $B$ would be `[400, 0]`. Fundamentally, `cv2.getRotationMatrix2D` is still applying an affine transformation to map the pixels from one point to another using a 2x3 matrix.\n\n    - Skeptical and want further mathematical proof? \n        - Hop to the **Trigonometry Proof** section. \n    - Want to experiment? \n        - Modify the script in `rotate_01.py` to obtain $M$ for a 180-degree rotation, and a 30-degree counter-clockwise rotation\n\n### Dive Deeper\n\nLet's also look at another application of `getAffineTransform` to strengthen our understanding. \n\nSuppose we specify $M$ to be `mat = np.float32([[1, 0, 0], [0, 1, 0]])`, what do you expect the transformation to be? 
\n\nTake a minute to discuss with your classmates or refer back to the Mathematical Definitions section above and try to internalize this before moving forward.\n\nTo verify your answer, run `scale_03.py` and see if your hunch was right.\n\nFor an extra challenge, let's assume `our_image.png` is a 200x200 image. Paying attention to the specification of `mat` ($M$), what do you expect the outcome `result` to be? \n\nTake a minute to discuss before moving forward.\n\n```py\nimg = cv2.imread(\"assets/our_image.png\")\ncv2.imshow(\"Original\", img)\n\n# custom transformation matrix\nmat = np.float32([[3, 0, 0], [0, 3, 0]])\nprint(mat)\nresult = cv2.warpAffine(img, M=mat, dsize=(200, 200))\n```\n\nYou may have expected the 2x3 matrix `mat` to have a scaling effect on our original image. However, the required `dsize` argument in our `warpAffine()` call constrained the output to its original dimension, 200x200, thus \"cropping out\" only the top left corner of the image. \n\nSuppose we'd like to see the transformed image (scaled by 3x) in its entirety; how would we change the value passed to the `dsize` argument? 
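To reason about the answer, we can check where the pixels of the original image land under $M$. This sketch uses only `numpy` (no OpenCV or image file needed, and the 200x200 size is the hypothetical one assumed above) to map the bottom-right pixel through the same 2x3 matrix:

```python
import numpy as np

# the same 2x3 scaling matrix as above
mat = np.float32([[3, 0, 0], [0, 3, 0]])

# bottom-right pixel of a 200x200 image, in homogeneous form [x, y, 1]
corner = np.float32([199, 199, 1])

# warpAffine maps each input pixel via M . [x, y, 1]^T
mapped = mat @ corner
print(mapped)  # [597. 597.]
```

Since the farthest pixel lands at (597, 597), `dsize` must grow to roughly three times the original for the whole scaled image to remain visible.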
\n\nRefer to `scale_04.py` to verify that you've got this right.\n\n#### Trigonometry Proof\n_This section is optional; feel free to skip it._\n\n- [ ] [Watch Rotation Matrix Explained Visually](https://www.youtube.com/watch?v=tIixrNtLJ8U)\n    <iframe width=\"560\" height=\"315\" src=\"https://www.youtube.com/embed/pWfXR_HmyUw\" frameborder=\"0\" allow=\"accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture\" allowfullscreen></iframe>\n\n    - [Bahasa Indonesia voiceover](https://www.youtube.com/watch?v=pWfXR_HmyUw) is also available\n\nOnce you've watched the video, see the same example presented in code:\n```py\na = np.float32([[0, -1], [1, 0]])\nx = np.float32([3, 6])\nnp.matmul(a, x)\n# output:\n# array([-6.,  3.], dtype=float32)\n```\n\n## Code Illustrations\n- Code example of using `getRotationMatrix2D()` to get a 2x3 matrix: **`rotate_01.py`**  \n- Code example of using three points to `getAffineTransform()`, obtaining a 2x3 matrix of `[[1,0,0], [0,1,0]]` (no transformation): `scale_01.py`\n- Code example of explicit specification for our 2x3 matrix using `np.float32([[1,0,0], [0,1,0]])`: **`scale_02.py`**\n- Code example of setting the `dsize` parameter in `cv2.warpAffine` without transformation: **`scale_03.py`**\n- Code example of a scale transformation and setting the `dsize` parameter accordingly: **`scale_04.py`**\n- Code example of using three points to `getAffineTransform()`, obtaining a 2x3 matrix that stretches the image horizontally: **`scale_05.py`**  \n- Code example of translating (shifting an image) using a 2x3 matrix: **`translate_01.py`**\n\n## Summary and Key Points\n1. Images from imaging systems and capturing systems are often \"subject to geometric distortion introduced by perspective irregularities\"[^1] or \"deformations that occur with non-ideal camera angles\"[^2].  \n\n2. 
In the case of translation or scaling, we typically specify our 2x3 matrix using `np.float32()` and feed this matrix to `cv2.warpAffine()`  \n\n3. In the case of rotation, we typically use the convenience function `cv2.getRotationMatrix2D()` to obtain the 2x3 matrix before feeding it to `cv2.warpAffine()`\n\n> `cv2.getAffineTransform(src, dst)`\n>\n> **Parameters:**\n> - **src** - Coordinates of triangle vertices in the source image\n> - **dst** - Coordinates of corresponding triangle vertices in the destination image\n\n\n## Learn-by-Building\nIn the `homework` directory, you'll find a digital map `belitung_raw.jpg`. Your job is to apply what you've learned in this lesson to restore the map by correcting its skew and resizing it appropriately. \n\n![](assets/hw1_belitung.png)\n\n\n\n## References\n[^1]: Fisher, R., Perkins, S., Walker, A., Wolfart, E., Hypermedia Image Processing Learning (HIPR2) Resources, 2003\n\n[^2]: [MathWorks](https://www.mathworks.com/discovery/affine-transformation.html), Linear mapping method using affine transformation, Affine Transformation"
  },
  {
    "path": "transformation/rotate_01.py",
    "content": "import numpy as np\nimport cv2\n\nimg = cv2.imread(\"assets/cvess.png\")\ncv2.imshow(\"Original\", img)\ncv2.waitKey(0)\n(h, w) = img.shape[:2]\n\ncenter = (w // 2, h // 2)\n# getRotationMatrix2D creates our 2x3 matrix\nmat = cv2.getRotationMatrix2D(center, angle=270, scale=1)\nprint(f'270 degree anti-clockwise: \\n {np.round(mat, 2)}')\nrotated = cv2.warpAffine(img, mat, (w, h))\ncv2.imshow(\"Rotated\", rotated)\ncv2.waitKey(0)"
  },
  {
    "path": "transformation/scale_01.py",
    "content": "import numpy as np\nimport cv2\n\nimg = cv2.imread(\"assets/corgi.png\")\ncv2.circle(img, (10, 10), 4, (0, 255, 255), -1)\ncv2.circle(img, (80, 10), 4, (0, 255, 255), -1)\ncv2.circle(img, (10, 80), 4, (0, 255, 255), -1)\ncv2.imshow(\"Original\", img)\n\ncoords_s = np.float32([[10, 10], [80, 10], [10, 80]])\ncoords_d = np.float32([[10, 10], [80, 10], [10, 80]])\n\n# getAffineTransform creates our 2x3 matrix\nmat = cv2.getAffineTransform(src=coords_s, dst=coords_d)\nprint(mat)\nresult = cv2.warpAffine(img, M=mat, dsize=(200, 200))\ncv2.imshow(\"Warped\", result)\n\ncv2.waitKey(0)\ncv2.destroyAllWindows()\n"
  },
  {
    "path": "transformation/scale_02.py",
    "content": "import numpy as np\nimport cv2\n\nimg = cv2.imread(\"assets/corgi.png\")\ncv2.circle(img, (10, 10), 4, (255, 0, 0), -1)\ncv2.circle(img, (80, 10), 4, (0, 255, 0), -1)\ncv2.circle(img, (10, 80), 4, (0, 0, 255), -1)\ncv2.imshow(\"Original\", img)\n\n# custom transformation matrix\nmat = np.float32([[1, 0, 0], [0, 1, 0]])\nprint(mat)\nresult = cv2.warpAffine(img, M=mat, dsize=(200, 200))\ncv2.imshow(\"Warped\", result)\n\ncv2.waitKey(0)\ncv2.destroyAllWindows()\n"
  },
  {
    "path": "transformation/scale_03.py",
    "content": "import numpy as np\nimport cv2\n\nimg = cv2.imread(\"assets/corgi.png\")\ncv2.imshow(\"Original\", img)\n\n# custom transformation matrix\nmat = np.float32([[1, 0, 0], [0, 1, 0]])\nprint(mat)\nresult = cv2.warpAffine(img, M=mat, dsize=(600, 600))\ncv2.imshow(\"600x600\", result)\n\ncv2.waitKey(0)\ncv2.destroyAllWindows()\n"
  },
  {
    "path": "transformation/scale_04.py",
    "content": "import numpy as np\nimport cv2\n\nimg = cv2.imread(\"assets/corgi.png\")\ncv2.imshow(\"Original\", img)\n\n# custom transformation matrix\nmat = np.float32([[3, 0, 0], [0, 3, 0]])\nprint(mat)\nresult = cv2.warpAffine(img, M=mat, dsize=(600, 600))\ncv2.imshow(\"600x600 with Scaling\", result)\n\ncv2.waitKey(0)\ncv2.destroyAllWindows()\n"
  },
  {
    "path": "transformation/scale_05.py",
    "content": "import numpy as np\nimport cv2\n\nimg = cv2.imread(\"assets/corgi.png\")\ncv2.circle(img, (10, 10), 4, (0, 255, 255), -1)\ncv2.circle(img, (80, 10), 4, (0, 255, 255), -1)\ncv2.circle(img, (10, 80), 4, (0, 255, 255), -1)\ncv2.imshow(\"Original\", img)\n\ncoords_s = np.float32([[10, 10], [80, 10], [10, 80]])\ncoords_d = np.float32([[10, 10], [95, 10], [10, 80]])\n\n# getAffineTransform creates our 2x3 matrix\nmat = cv2.getAffineTransform(src=coords_s, dst=coords_d)\nprint(mat)\nresult = cv2.warpAffine(img, M=mat, dsize=(200, 200))\ncv2.imshow(\"Warped\", result)\n\ncv2.waitKey(0)\ncv2.destroyAllWindows()\n"
  },
  {
    "path": "transformation/translate_01.py",
    "content": "import numpy as np\nimport cv2\n\nimg = cv2.imread(\"assets/cvess.png\")\ncv2.imshow(\"Original\", img)\ncv2.waitKey(0)\n(h, w) = img.shape[:2]\n\n# Specify our 2x3 matrix\nmat = np.float32([[1, 0, -140], [0, 1, 20]])\ntranslated = cv2.warpAffine(img, mat, (w, h))\ncv2.imshow(\"Translated\", translated)\ncv2.waitKey(0)"
  }
]