Repository: StanfordVL/CS131_release
Branch: master
Commit: 11103e407cc5
Files: 100
Total size: 6.2 MB
Directory structure:
gitextract_pn6zgoo8/
├── .gitignore
├── LICENSE
├── README.md
└── spring_2026/
├── README.md
├── hw0_release/
│ ├── README.md
│ ├── hw0.ipynb
│ └── requirements.txt
└── project1_release/
├── README.md
├── option_A/
│ ├── filters.py
│ ├── option_a.ipynb
│ └── option_a_exploration.ipynb
├── option_B/
│ ├── edge.py
│ ├── images/
│ │ ├── gt/
│ │ │ ├── 101.pgm.gtf.pgm
│ │ │ ├── 103.pgm.gtf.pgm
│ │ │ ├── 104.pgm.gtf.pgm
│ │ │ ├── 105.pgm.gtf.pgm
│ │ │ ├── 106.pgm.gtf.pgm
│ │ │ ├── 108.pgm.gtf.pgm
│ │ │ ├── 109.pgm.gtf.pgm
│ │ │ ├── 110.pgm.gtf.pgm
│ │ │ ├── 111.pgm.gtf.pgm
│ │ │ ├── 125.pgm.gtf.pgm
│ │ │ ├── 126.pgm.gtf.pgm
│ │ │ ├── 130.pgm.gtf.pgm
│ │ │ ├── 131.pgm.gtf.pgm
│ │ │ ├── 132.pgm.gtf.pgm
│ │ │ ├── 133.pgm.gtf.pgm
│ │ │ ├── 134.pgm.gtf.pgm
│ │ │ ├── 137.pgm.gtf.pgm
│ │ │ ├── 138.pgm.gtf.pgm
│ │ │ ├── 143.pgm.gtf.pgm
│ │ │ ├── 144.pgm.gtf.pgm
│ │ │ ├── 146.pgm.gtf.pgm
│ │ │ ├── 202.pgm.gtf.pgm
│ │ │ ├── 203.pgm.gtf.pgm
│ │ │ ├── 204.pgm.gtf.pgm
│ │ │ ├── 207.pgm.gtf.pgm
│ │ │ ├── 214.pgm.gtf.pgm
│ │ │ ├── 215.pgm.gtf.pgm
│ │ │ ├── 217.pgm.gtf.pgm
│ │ │ ├── 218.pgm.gtf.pgm
│ │ │ ├── 220.pgm.gtf.pgm
│ │ │ ├── 221.pgm.gtf.pgm
│ │ │ ├── 223.pgm.gtf.pgm
│ │ │ ├── 36.pgm.gtf.pgm
│ │ │ ├── 43.pgm.gtf.pgm
│ │ │ ├── 47.pgm.gtf.pgm
│ │ │ ├── 48.pgm.gtf.pgm
│ │ │ ├── 50.pgm.gtf.pgm
│ │ │ ├── 56.pgm.gtf.pgm
│ │ │ ├── 61.pgm.gtf.pgm
│ │ │ └── 62.pgm.gtf.pgm
│ │ └── objects/
│ │ ├── 101.pgm
│ │ ├── 103.pgm
│ │ ├── 104.pgm
│ │ ├── 105.pgm
│ │ ├── 106.pgm
│ │ ├── 108.pgm
│ │ ├── 109.pgm
│ │ ├── 110.pgm
│ │ ├── 111.pgm
│ │ ├── 125.pgm
│ │ ├── 126.pgm
│ │ ├── 130.pgm
│ │ ├── 131.pgm
│ │ ├── 132.pgm
│ │ ├── 133.pgm
│ │ ├── 134.pgm
│ │ ├── 137.pgm
│ │ ├── 138.pgm
│ │ ├── 143.pgm
│ │ ├── 144.pgm
│ │ ├── 146.pgm
│ │ ├── 202.pgm
│ │ ├── 203.pgm
│ │ ├── 204.pgm
│ │ ├── 207.pgm
│ │ ├── 214.pgm
│ │ ├── 215.pgm
│ │ ├── 217.pgm
│ │ ├── 218.pgm
│ │ ├── 220.pgm
│ │ ├── 221.pgm
│ │ ├── 223.pgm
│ │ ├── 36.pgm
│ │ ├── 43.pgm
│ │ ├── 47.pgm
│ │ ├── 48.pgm
│ │ ├── 50.pgm
│ │ ├── 56.pgm
│ │ ├── 61.pgm
│ │ └── 62.pgm
│ ├── option_b.ipynb
│ ├── option_b_exploration.ipynb
│ └── references/
│ ├── iguana_canny.npy
│ ├── iguana_edge_tracking.npy
│ ├── iguana_non_max_suppressed.npy
│ └── iguana_non_max_suppressed.png.npy
└── option_C/
├── option_c.ipynb
└── option_c_exploration.ipynb
================================================
FILE CONTENTS
================================================
================================================
FILE: .gitignore
================================================
.DS_STORE
*.ipynb_checkpoints*
*__pycache__*
winter_2024/project2_release/option_C/option_c_sol.ipynb
winter_2024/project2_release/option_C/motion_sol.py
winter_2024/project2_release/option_D/option_d_sol.ipynb
winter_2024/project2_release/option_D/segmentation_sol.py
================================================
FILE: LICENSE
================================================
COPYRIGHT
Copyright (c) 2018, Ranjay Krishna.
All rights reserved.
Each contributor holds copyright over their respective contributions.
The project versioning (Git) records all such contribution source information.
LICENSE
The MIT License (MIT)
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
================================================
FILE: README.md
================================================
# CS131: Computer Vision Foundations and Applications
This repository contains the released assignments for the [fall 2017](http://vision.stanford.edu/teaching/cs131_fall1718/), [fall 2018](http://vision.stanford.edu/teaching/cs131_fall1819), [fall 2019](http://vision.stanford.edu/teaching/cs131_fall1920), [fall 2020](http://vision.stanford.edu/teaching/cs131_fall2021/), [fall 2021](http://vision.stanford.edu/teaching/cs131_fall2122/), [fall 2022](http://vision.stanford.edu/teaching/cs131_fall2223/), [winter 2024](http://vision.stanford.edu/teaching/cs131_winter2324/), [winter 2025](https://stanford-cs131.github.io/winter2025/), and [spring 2026](https://stanford-cs131.github.io/spring2026/) iterations of CS131, a course at Stanford taught by [Juan Carlos Niebles](http://www.niebles.net), [Adrien Gaidon](https://adriengaidon.com/), and Silvio Savarese.
The assignments cover a wide range of topics in computer vision, including low-level vision, geometry, and visual recognition. See more details in the related branches.
All the homeworks are under the MIT license.
================================================
FILE: spring_2026/README.md
================================================
# CS131 Spring 2026 Homework
Homework 0 (basics): 10%
Homework 1: 10%
Mini-project 1: 15%
Homework 2: 10%
Mini-Project 2: 15%
Final Project: 40%
================================================
FILE: spring_2026/hw0_release/README.md
================================================
# Homework 0
Open the [`hw0.ipynb` colab notebook](https://colab.research.google.com/drive/11-yXOjx04ydgp5OK_16HWoCtXBWbS_Nb?usp=sharing) (ensure you're logged into your Stanford email to access), click "open with colab" and then "copy to drive" to have a copy in your Stanford Google drive, and follow the instructions in the notebook to complete the assignment.
Follow the instructions on [this webpage](https://stanford-cs131.github.io/spring2026/assignments.html) to learn how to submit assignments to Gradescope.
================================================
FILE: spring_2026/hw0_release/hw0.ipynb
================================================
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "YbbOGOmmvF_L"
},
"source": [
"# Homework 0\n",
"In this homework, we will go through basic linear algebra, NumPy, and image manipulation using Python to get everyone on the same page for the prerequisite skills for this class.\n",
"\n",
"One of the aims of this homework assignment is to get you to start getting comfortable searching for useful library functions online. So in many of the functions you will implement, you will have to look up helper functions."
]
},
{
"cell_type": "markdown",
"source": [
"# Setup"
],
"metadata": {
"id": "kFlvkQbd9rpw"
}
},
{
"cell_type": "markdown",
"source": [
"###**Step 1**\n",
"First, run the cells below to clone the `CS131_release` [repo](https://github.com/StanfordVL/CS131_release) and `cd` into the correct directory in order to access some necessary files.\n",
"\n",
"\n"
],
"metadata": {
"id": "dWteGLJ-HtzG"
}
},
{
"cell_type": "code",
"source": [
"import os\n",
"\n",
"if not os.path.exists(\"CS131_release\"):\n",
" # Clone the repository if it doesn't already exist\n",
" !git clone https://github.com/StanfordVL/CS131_release.git"
],
"metadata": {
"id": "_qzk4-8_kB5W"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"%cd CS131_release/spring_2026/hw0_release/"
],
"metadata": {
"id": "VkEPqnSSkFax"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"###**Step 2**\n",
"Next, run the cells below to install the necessary libraries and packages."
],
"metadata": {
"id": "p-M-lvN5L6Or"
}
},
{
"cell_type": "code",
"source": [
"# Install the necessary dependencies\n",
"# (restart your runtime session if prompted to, and then re-run this cell)\n",
"\n",
"!pip install -r requirements.txt"
],
"metadata": {
"id": "dmeEGY326c9j"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "ezYsCrzNvF_M"
},
"outputs": [],
"source": [
"# Imports the print function from newer versions of python\n",
"from __future__ import print_function\n",
"\n",
"\n",
"# The Random module implements pseudo-random number generators\n",
"import random\n",
"\n",
"# Numpy is the main package for scientific computing with Python.\n",
"# This will be one of our most used libraries in this class\n",
"import numpy as np\n",
"\n",
"# The Time library helps us time code runtimes\n",
"import time\n",
"\n",
"# PIL (Pillow) is a useful library for opening, manipulating, and saving images\n",
"from PIL import Image\n",
"\n",
"# skimage (Scikit-Image) is a library for image processing\n",
"from skimage import color, io\n",
"\n",
"# Matplotlib is a useful plotting library for python\n",
"import matplotlib.pyplot as plt\n",
"# This code is to make matplotlib figures appear inline in the\n",
"# notebook rather than in a new window.\n",
"%matplotlib inline\n",
"plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots\n",
"plt.rcParams['image.interpolation'] = 'nearest'\n",
"plt.rcParams['image.cmap'] = 'gray'"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": true,
"id": "6RlGm4wAvF_M"
},
"source": [
"# Question 1: Linear Algebra and NumPy Review\n",
"In this section, we will review linear algebra and learn how to use vectors and matrices in python using numpy."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "ntoNK39DvF_M"
},
"source": [
"## Question 1.1 (5 points)\n",
"First, let's test whether you can define the following matrices and vectors using numpy. Look up `np.array()` for help. In the next code block, define $M$ as a $(4, 3)$ matrix, $a$ as a $(1, 3)$ row vector and $b$ as a $(3, 1)$ column vector:\n",
"\n",
"$$M = \\begin{bmatrix}\n",
"1 & 2 & 3 \\\\\n",
"4 & 5 & 6 \\\\\n",
"7 & 8 & 9 \\\\\n",
"10 & 11 & 12 \\end{bmatrix}\n",
"$$\n",
"\n",
"$$a = \\begin{bmatrix}\n",
"1 & 1 & 0\n",
"\\end{bmatrix}\n",
"$$\n",
"\n",
"$$b = \\begin{bmatrix}\n",
"-1 \\\\ 2 \\\\ 5\n",
"\\end{bmatrix} \n",
"$$"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "rmvtCYOivF_M"
},
"outputs": [],
"source": [
"### YOUR CODE HERE\n",
"pass\n",
"### END CODE HERE\n",
"print(\"M = \\n\", M)\n",
"print(\"The size of M is: \", M.shape)\n",
"print()\n",
"print(\"a = \", a)\n",
"print(\"The size of a is: \", a.shape)\n",
"print()\n",
"print(\"b = \", b)\n",
"print(\"The size of b is: \", b.shape)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "yld5HABqvF_N"
},
"source": [
"## Question 1.2 (5 points)\n",
"Implement the `dot_product()` method below and check that it returns the correct answer for $a^Tb$."
]
},
{
"cell_type": "code",
"source": [
"def dot_product(a, b):\n",
" \"\"\"Implement dot product between the two vectors: a and b.\n",
"\n",
" (optional): While you can solve this using for loops, we recommend\n",
" that you look up `np.dot()` online and use that instead.\n",
"\n",
" Args:\n",
" a: numpy array of shape (x, n)\n",
" b: numpy array of shape (n, x)\n",
"\n",
" Returns:\n",
" out: numpy array of shape (x, x) (a (1, 1) array if x = 1)\n",
" \"\"\"\n",
" out = None\n",
" ### YOUR CODE HERE\n",
" pass\n",
" ### END YOUR CODE\n",
" return out"
],
"metadata": {
"id": "6T6tWrCPhP3e"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "jhwxJIOIvF_N"
},
"outputs": [],
"source": [
"# Now, let's test out this dot product. Your answer should be [[1]].\n",
"aDotB = dot_product(a, b)\n",
"print(aDotB)\n",
"\n",
"print(\"The size is: \", aDotB.shape)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "9drZmTD2vF_N"
},
"source": [
"## Question 1.3 (5 points)\n",
"Implement the `complicated_matrix_function()` method below and use it to compute $(ab)Ma^T$\n",
"\n",
"IMPORTANT NOTE: The `complicated_matrix_function()` method expects all inputs to be two dimensional numpy arrays, as opposed to 1-D arrays. This is an important distinction, because 2-D arrays can be transposed, while 1-D arrays cannot.\n",
"\n",
"To transpose a 2-D array, you can use the syntax `array.T`"
]
},
{
"cell_type": "code",
"source": [
"def complicated_matrix_function(M, a, b):\n",
" \"\"\"Implement (a * b) * (M * a.T).\n",
"\n",
" (optional): Use the `dot_product(a, b)` function you wrote above\n",
" as a helper function.\n",
"\n",
" Args:\n",
" M: numpy matrix of shape (x, n).\n",
" a: numpy array of shape (1, n).\n",
" b: numpy array of shape (n, 1).\n",
"\n",
" Returns:\n",
" out: numpy matrix of shape (x, 1).\n",
" \"\"\"\n",
" out = None\n",
" ### YOUR CODE HERE\n",
" pass\n",
" ### END YOUR CODE\n",
"\n",
" return out"
],
"metadata": {
"id": "8i-HNs9Thl6j"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "HxtlMhRZvF_N"
},
"outputs": [],
"source": [
"# Your answer should be $[[3], [9], [15], [21]]$ of shape(4, 1).\n",
"ans = complicated_matrix_function(M, a, b)\n",
"print(ans)\n",
"print()\n",
"print(\"The size is: \", ans.shape)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "enpz9sluvF_N"
},
"outputs": [],
"source": [
"M_2 = np.array(range(4)).reshape((2,2))\n",
"a_2 = np.array([[1,1]])\n",
"b_2 = np.array([[10, 10]]).T\n",
"print(M_2.shape)\n",
"print(a_2.shape)\n",
"print(b_2.shape)\n",
"print()\n",
"\n",
"# Your answer should be $[[20], [100]]$ of shape(2, 1).\n",
"ans = complicated_matrix_function(M_2, a_2, b_2)\n",
"print(ans)\n",
"print()\n",
"print(\"The size is: \", ans.shape)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "fzO1rzJyvF_N"
},
"source": [
"## Question 1.4 (10 points)\n",
"Implement `eigen_decomp()` and `get_eigen_values_and_vectors()` methods. In this method, perform eigenvalue decomposition on the following matrix and return the largest k eigen values and corresponding eigen vectors (k is specified in the method calls below).\n",
"\n",
"$$M = \\begin{bmatrix}\n",
"1 & 2 & 3 \\\\\n",
"4 & 5 & 6 \\\\\n",
"7 & 8 & 9 \\end{bmatrix}\n",
"$$\n"
]
},
{
"cell_type": "code",
"source": [
"def eigen_decomp(M):\n",
" \"\"\"Implement eigenvalue decomposition.\n",
"\n",
" (optional): You might find the `np.linalg.eig` function useful.\n",
"\n",
" Args:\n",
" M: numpy matrix of shape (m, m)\n",
"\n",
" Returns:\n",
" w: numpy array of shape (m,) containing the eigenvalues.\n",
" v: numpy array of shape (m, m) where the column v[:,i] is the eigenvector corresponding to the eigenvalue w[i].\n",
" \"\"\"\n",
" w = None\n",
" v = None\n",
" ### YOUR CODE HERE\n",
" pass\n",
" ### END YOUR CODE\n",
" return w, v"
],
"metadata": {
"id": "xfnH5CgthyOI"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"def get_eigen_values_and_vectors(M, k):\n",
" \"\"\"Return top k eigenvalues and eigenvectors of matrix M. By top k\n",
" here we mean the eigenvalues with the largest ABSOLUTE values (look up\n",
" np.argsort for a hint on how to do so).\n",
"\n",
" (optional): Use the `eigen_decomp(M)` function you wrote above\n",
" as a helper function\n",
"\n",
" Args:\n",
" M: numpy matrix of shape (m, m).\n",
" k: number of eigen values and respective vectors to return.\n",
"\n",
" Returns:\n",
" eigenvalues: list of length k containing the top k eigenvalues\n",
" eigenvectors: list of length k containing the top k eigenvectors\n",
" of shape (m,)\n",
" \"\"\"\n",
" eigenvalues = []\n",
" eigenvectors = []\n",
" ### YOUR CODE HERE\n",
" pass\n",
" ### END YOUR CODE\n",
" return eigenvalues, eigenvectors"
],
"metadata": {
"id": "SXc6lqC2hy8B"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "Yeeinwh5vF_N"
},
"outputs": [],
"source": [
"# Let's define M.\n",
"M = np.array([[1,2,3],[4,5,6],[7,8,9]])\n",
"\n",
"# Now let's grab the first eigenvalue and first eigenvector.\n",
"# You should get back a single eigenvalue and a single eigenvector.\n",
"val, vec = get_eigen_values_and_vectors(M[:,:3], 1)\n",
"print(\"First eigenvalue =\", val[0])\n",
"print()\n",
"print(\"First eigenvector =\", vec[0])\n",
"print()\n",
"assert len(vec) == 1\n",
"\n",
"# Now, let's get the first two eigenvalues and eigenvectors.\n",
"# You should get back a list of two eigenvalues and a list of two eigenvector arrays.\n",
"val, vec = get_eigen_values_and_vectors(M[:,:3], 2)\n",
"print(\"Eigenvalues =\", val)\n",
"print()\n",
"print(\"Eigenvectors =\", vec)\n",
"assert len(vec) == 2"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "nPB5JDZ8vF_N"
},
"source": [
"## Question 1.5 (10 points)\n",
"\n",
"To wrap up our overview of NumPy, let's implement something fun — a helper function for computing the Euclidean distance between two $n$-dimensional points!\n",
"\n",
"In the 2-dimensional case, computing the Euclidean distance reduces to solving the Pythagorean theorem $c = \\sqrt{a^2 + b^2}$:\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"...where, given two points $(x_1, y_1)$ and $(x_2, y_2)$, $a = x_1 - x_2$ and $b = y_1 - y_2$.\n",
"\n",
"\n",
"More generally, given two $n$-dimensional vectors, the Euclidean distance can be computed by:\n",
"\n",
"1. Performing an elementwise subtraction between the two vectors, to get $n$ difference values.\n",
"2. Squaring each of the $n$ difference values, and summing the squares.\n",
"3. Taking the square root of our sum.\n",
"\n",
"Alternatively, the Euclidean distance between length-$n$ vectors $u$ and $v$ can be written as:\n",
"\n",
"$\n",
"\\quad\\textbf{distance}(u, v) = \\sqrt{\\sum_{i=1}^n (u_i - v_i)^2}\n",
"$\n",
"\n",
"\n",
"Try implementing this function: first using native Python with a `for` loop in the `euclidean_distance_native()` function, then in NumPy **without any loops** in the `euclidean_distance_numpy()` function.\n",
"We've added some `assert` statements here to help you check functionality (if it prints nothing, then your implementation is correct)!"
]
},
{
"cell_type": "code",
"source": [
"def euclidean_distance_native(u, v):\n",
" \"\"\"Computes the Euclidean distance between two vectors, represented as Python\n",
" lists.\n",
"\n",
" Args:\n",
" u (List[float]): A vector, represented as a list of floats.\n",
" v (List[float]): A vector, represented as a list of floats.\n",
"\n",
" Returns:\n",
" float: Euclidean distance between `u` and `v`.\n",
" \"\"\"\n",
" # First, run some checks:\n",
" assert isinstance(u, list)\n",
" assert isinstance(v, list)\n",
" assert len(u) == len(v)\n",
"\n",
" # Compute the distance!\n",
" # Notes:\n",
" # 1) Try breaking this problem down: first, we want to get\n",
" # the difference between corresponding elements in our\n",
" # input arrays. Then, we want to square these differences.\n",
" # Finally, we want to sum the squares and square root the\n",
" # sum.\n",
"\n",
" ### YOUR CODE HERE\n",
" pass\n",
" ### END YOUR CODE"
],
"metadata": {
"id": "bpc_6ytjh4gY"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "NYboVRxgvF_N"
},
"outputs": [],
"source": [
"## Testing native Python function\n",
"assert euclidean_distance_native([7.0], [6.0]) == 1.0\n",
"assert euclidean_distance_native([7.0, 0.0], [3.0, 3.0]) == 5.0\n",
"assert euclidean_distance_native([7.0, 0.0, 0.0], [3.0, 0.0, 3.0]) == 5.0"
]
},
{
"cell_type": "code",
"source": [
"def euclidean_distance_numpy(u, v):\n",
" \"\"\"Computes the Euclidean distance between two vectors, represented as NumPy\n",
" arrays.\n",
"\n",
" Args:\n",
" u (np.ndarray): A vector, represented as a NumPy array.\n",
" v (np.ndarray): A vector, represented as a NumPy array.\n",
"\n",
" Returns:\n",
" float: Euclidean distance between `u` and `v`.\n",
" \"\"\"\n",
" # First, run some checks:\n",
" assert isinstance(u, np.ndarray)\n",
" assert isinstance(v, np.ndarray)\n",
" assert u.shape == v.shape\n",
"\n",
" # Compute the distance!\n",
" # Note:\n",
" # 1) You shouldn't need any loops\n",
" # 2) Some functions you can Google that might be useful:\n",
" # np.sqrt(), np.sum()\n",
" # 3) Try breaking this problem down: first, we want to get\n",
" # the difference between corresponding elements in our\n",
" # input arrays. Then, we want to square these differences.\n",
" # Finally, we want to sum the squares and square root the\n",
" # sum.\n",
"\n",
" ### YOUR CODE HERE\n",
" pass\n",
" ### END YOUR CODE"
],
"metadata": {
"id": "Sg2xfBDYh_WU"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "cLiqmCF1vF_N"
},
"outputs": [],
"source": [
"## Testing NumPy function\n",
"assert euclidean_distance_numpy(\n",
" np.array([7.0]),\n",
" np.array([6.0])\n",
") == 1.0\n",
"assert euclidean_distance_numpy(\n",
" np.array([7.0, 0.0]),\n",
" np.array([3.0, 3.0])\n",
") == 5.0\n",
"assert euclidean_distance_numpy(\n",
" np.array([7.0, 0.0, 0.0]),\n",
" np.array([3.0, 0.0, 3.0])\n",
") == 5.0"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "ej6SnRR3vF_N"
},
"source": [
"Next, let's take a look at how these two implementations compare in terms of runtime:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "8Y4kckihvF_O"
},
"outputs": [],
"source": [
"n = 1000\n",
"\n",
"# Create some length-n lists and/or n-dimensional arrays\n",
"a = [0.0] * n\n",
"b = [10.0] * n\n",
"a_array = np.array(a)\n",
"b_array = np.array(b)\n",
"\n",
"# Compute runtime for native implementation\n",
"start_time = time.time()\n",
"for i in range(10000):\n",
" euclidean_distance_native(a, b)\n",
"print(\"Native:\", (time.time() - start_time), \"seconds\")\n",
"\n",
"# Compute runtime for numpy implementation\n",
"# Start by grabbing the current time in seconds\n",
"start_time = time.time()\n",
"for i in range(10000):\n",
" euclidean_distance_numpy(a_array, b_array)\n",
"print(\"NumPy:\", (time.time() - start_time), \"seconds\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "qHrAKmETvF_O"
},
"source": [
"As you can see, doing vectorized calculations (i.e. no for loops) with NumPy results in significantly faster computations!"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "xLvGGjR6vF_O"
},
"source": [
"# Part 2: Image Manipulation\n",
"\n",
"Now that you are familiar with using matrices and vectors, let's load some images, treat them as matrices, and perform some operations on them. Make sure you've followed the instructions at the top of the notebook (you've cloned `CS131_release` and are in the correct directory)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "r1qlIkmgvF_O"
},
"outputs": [],
"source": [
"# Run this code to set the locations of the images we will be using.\n",
"# You can change these paths to point to your own images if you want to try them out for fun.\n",
"\n",
"image1_path = 'image1.jpg'\n",
"image2_path = 'image2.jpg'\n",
"\n",
"def display(img):\n",
" # Show image\n",
" plt.figure(figsize = (5,5))\n",
" plt.imshow(img)\n",
" plt.axis('off')\n",
" plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "4W6YIau8vF_O"
},
"source": [
"## Question 2.1 (5 points)\n",
"Read the `display()` method above and implement the `load()` method below. We will use these two methods through the rest of the notebook to visualize our work."
]
},
{
"cell_type": "code",
"source": [
"def load(image_path):\n",
" \"\"\"Loads an image from a file path.\n",
"\n",
" HINT: Look up `skimage.io.imread()` function.\n",
"\n",
" Args:\n",
" image_path: file path to the image.\n",
"\n",
" Returns:\n",
" out: numpy array of shape(image_height, image_width, 3).\n",
" \"\"\"\n",
" out = None\n",
"\n",
" ### YOUR CODE HERE\n",
" # Use skimage io.imread\n",
" pass\n",
" ### END YOUR CODE\n",
"\n",
" # Let's convert the image to be between the correct range.\n",
" out = out.astype(np.float64) / 255\n",
" return out"
],
"metadata": {
"id": "Nlpa5yUIil2J"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "8x0WZjnjvF_O"
},
"outputs": [],
"source": [
"image1 = load(image1_path)\n",
"\n",
"display(image1)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Avl1s7ArvF_O"
},
"source": [
"## Question 2.2 (5 points)\n",
"One of the most common operations we perform when working with images is rectangular **cropping**, or the action of removing unwanted outer areas of an image.\n",
"\n",
"Take a look at this code we've written to crop out everything but the eyes of our baboon from above:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "lbkv2zSJvF_O"
},
"outputs": [],
"source": [
"display(image1[10:60, 70:230, :])"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "WmjP_uU8vF_O"
},
"source": [
"Implement the `crop_image()` method by taking in the starting row index, starting column index, number of rows, and number of columns, and outputting the cropped image.\n",
"\n",
"Then, in the cell below, see if you can pull out a 100x100 square from each corner of the original `image1`: the top left, top right, bottom left, and bottom right."
]
},
{
"cell_type": "code",
"source": [
"def crop_image(image, start_row, start_col, num_rows, num_cols):\n",
" \"\"\"Crop an image based on the specified bounds.\n",
"\n",
" Args:\n",
" image: numpy array of shape(image_height, image_width, 3).\n",
" start_row (int): The starting row index we want to include in our cropped image.\n",
" start_col (int): The starting column index we want to include in our cropped image.\n",
" num_rows (int): Number of rows in our desired cropped image.\n",
" num_cols (int): Number of columns in our desired cropped image.\n",
"\n",
" Returns:\n",
" out: numpy array of shape(num_rows, num_cols, 3).\n",
" \"\"\"\n",
"\n",
" out = None\n",
"\n",
" ### YOUR CODE HERE\n",
" pass\n",
" ### END YOUR CODE\n",
"\n",
" return out"
],
"metadata": {
"id": "j7cIiOfOjI54"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "_Lq22ccsvF_O"
},
"outputs": [],
"source": [
"r, c = image1.shape[0], image1.shape[1]\n",
"\n",
"top_left_corner = crop_image(image1, 0, 0, 100, 100)\n",
"top_right_corner = crop_image(image1, 0, c-100, 100, 100)\n",
"bottom_left_corner = crop_image(image1, r-100, 0, 100, 100)\n",
"bottom_right_corner = crop_image(image1, r-100, c-100, 100, 100)\n",
"\n",
"display(top_left_corner)\n",
"display(top_right_corner)\n",
"display(bottom_left_corner)\n",
"display(bottom_right_corner)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "QN7goHAEvF_O"
},
"source": [
"## Question 2.3 (10 points)\n",
"Implement the `dim_image()` method by converting images according to $x_n = 0.5*x_p^2$ for every pixel, where $x_n$ is the new value and $x_p$ is the original value.\n",
"\n",
"Note: Since all the pixel values of the image are in the range $[0, 1]$, the above formula will result in reducing these pixel values and therefore making the image dimmer."
]
},
{
"cell_type": "code",
"source": [
"def dim_image(image):\n",
" \"\"\"Change the value of every pixel by following\n",
"\n",
" x_n = 0.5*x_p^2\n",
"\n",
" where x_n is the new value and x_p is the original value.\n",
"\n",
" Args:\n",
" image: numpy array of shape(image_height, image_width, 3).\n",
"\n",
" Returns:\n",
" out: numpy array of shape(image_height, image_width, 3).\n",
" \"\"\"\n",
"\n",
" out = None\n",
"\n",
" ### YOUR CODE HERE\n",
" pass\n",
" ### END YOUR CODE\n",
"\n",
" return out"
],
"metadata": {
"id": "uQ9_nCFpjaU3"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "QGGjaJOZvF_O"
},
"outputs": [],
"source": [
"new_image = dim_image(image1)\n",
"display(new_image)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "-V_hzJPWvF_O"
},
"source": [
"## Question 2.4 (10 points)\n",
"Let's try another commonly used operation: image resizing!\n",
"\n",
"At a high level, image resizing should go something like this:\n",
"\n",
"1. We create an (initially empty) output array of the desired size, `output_image`\n",
"2. We iterate over each pixel position `(i,j)` in the output image\n",
" - For each output pixel, we compute a corresponding input pixel `(input_i, input_j)`\n",
" - We assign `output_image[i, j, :]` to `input_image[input_i, input_j, :]`\n",
"3. We return the resized output image\n",
"\n",
"We want `input_i` and `input_j` to increase proportionally with `i` and `j` respectively:\n",
"\n",
"- `input_i` can be computed as `int(i * row_scale_factor)`\n",
"- `input_j` can be computed as `int(j * col_scale_factor)`\n",
"\n",
"...where `int()` is a Python operation that takes a float and rounds it down to the nearest integer, and `row_scale_factor` and `col_scale_factor` are constants computed from the image input/output sizes.\n",
"\n",
"Try to figure out what `row_scale_factor` and `col_scale_factor` should be, then implement this algorithm in the `resize_image()` method! Then, run the cells below to test out your image resizing algorithm!\n",
"\n",
"When you downsize the baboon to 16x16, you should expect an output that looks something like this:\n",
"\n",
"\n",
"\n",
"\n",
"When you stretch it horizontally to 50x400, you should get:\n",
"\n",
""
]
},
{
"cell_type": "code",
"source": [
"def resize_image(input_image, output_rows, output_cols):\n",
" \"\"\"Resize an image using the nearest neighbor method.\n",
"\n",
" Args:\n",
" input_image (np.ndarray): RGB image stored as an array, with shape\n",
" `(input_rows, input_cols, 3)`.\n",
" output_rows (int): Number of rows in our desired output image.\n",
" output_cols (int): Number of columns in our desired output image.\n",
"\n",
" Returns:\n",
" np.ndarray: Resized image, with shape `(output_rows, output_cols, 3)`.\n",
" \"\"\"\n",
" input_rows, input_cols, channels = input_image.shape\n",
" assert channels == 3\n",
"\n",
" # 1. Create the resized output image\n",
" output_image = np.zeros(shape=(output_rows, output_cols, 3))\n",
"\n",
" # 2. Populate the `output_image` array using values from `input_image`\n",
" # > This should require two nested for loops!\n",
"\n",
" ### YOUR CODE HERE\n",
" pass\n",
" ### END YOUR CODE\n",
"\n",
" # 3. Return the output image\n",
" return output_image"
],
"metadata": {
"id": "AJct3_IDjg_Y"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "fPx49RnMvF_O"
},
"outputs": [],
"source": [
"display(resize_image(image1, 16, 16))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "h44Ui-EuvF_O"
},
"outputs": [],
"source": [
"display(resize_image(image1, 50, 400))"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "qYomF-wWvF_O"
},
"source": [
"**Question:** In the resize algorithm we describe above, the output is populated by iterating over the indices of the output image. Could we implement image resizing by iterating over the indices of the input image instead? How do the two approaches compare?\n",
"\n",
"> *Your response here!*"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "pwT1j1SuvF_P"
},
"source": [
"## Question 2.5 (15 points)\n",
"\n",
"One more operation that you can try implementing is **image rotation**. This is part of a real interview question that we've encountered for actual computer vision jobs (notably at Facebook), and we expect it to require quite a bit more thinking."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "MBNB_CRyvF_P"
},
"source": [
"#### a) Rotating 2D coordinates (5 points)\n",
"\n",
"Before we start thinking about rotating full images, let's start by taking a look at rotating `(x, y)` coordinates:\n",
"\n",
"\n",
"\n",
"Using `np.cos()` and `np.sin()`, implement the `rotate2d()` method to compute the coordinates $(x', y')$ rotated by theta radians from $(x, y)$.\n",
"\n",
"Once you've implemented the function, test your implementation below using the assert statements (if it prints nothing, then your implementation is correct):"
]
},
{
"cell_type": "code",
"source": [
"def rotate2d(point, theta):\n",
" \"\"\"Rotate a 2D coordinate by some angle theta.\n",
"\n",
" Args:\n",
" point (np.ndarray): A 1D NumPy array containing two values: an x and y coordinate.\n",
" theta (float): An theta to rotate by, in radians.\n",
"\n",
" Returns:\n",
" np.ndarray: A 1D NumPy array containing your rotated x and y values.\n",
" \"\"\"\n",
" assert point.shape == (2,)\n",
" assert isinstance(theta, float)\n",
"\n",
" # Reminder: np.cos() and np.sin() will be useful here!\n",
"\n",
" ### YOUR CODE HERE\n",
" pass\n",
" ### END YOUR CODE"
],
"metadata": {
"id": "U8ArvRCCjleM"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "PIFcF4SHvF_P"
},
"outputs": [],
"source": [
"assert rotate2d(np.array([1.0, 0.0]), 0.0).shape == (\n",
" 2,\n",
"), \"Output shape incorrect!\"\n",
"assert np.allclose(\n",
" rotate2d(np.array([1.0, 0.0]), 0.0), np.array([1.0, 0.0])\n",
"), \"\"\n",
"assert np.allclose(\n",
" rotate2d(np.array([1.0, 0.0]), np.pi / 2.0), np.array([0.0, 1.0])\n",
"), \"\""
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "ubUnR6OyvF_P"
},
"source": [
"Run the cell below to visualize a point as it's rotated around the origin by a set of evenly-spaced angles! You should see 30 points arranged in a circle."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "e8-xxZGIvF_P"
},
"outputs": [],
"source": [
"# Visualize a point being rotated around the origin\n",
"# We'll use the matplotlib library for this!\n",
"import matplotlib.pyplot as plt\n",
"\n",
"points = np.zeros((30, 2))\n",
"for i in range(30):\n",
" points[i, :] = rotate2d(np.array([1.0, 0.0]), i / 30.0 * (2 * np.pi))\n",
"\n",
"plt.scatter(points[:, 0], points[:, 1])\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "OPv1JbosvF_P"
},
"source": [
"**Question:** Our function currently only rotates input points around the origin (0,0). Using the same `rotate2d` function, how could we rotate the point around a center that wasn't at the origin? **You'll need to do this when you implement image rotation below!**\n",
"\n",
"> *Your response here!*"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "i8A_51QPvF_P"
},
"source": [
"#### b) Rotate Image (10 points)\n",
"\n",
"Finally, use what you've learned about 2D rotations to create and implement the `rotate_image(input_image, theta)` function!\n",
"\n",
"For an input angle of $\\pi/4$ (45 degrees), the expected output is:\n",
"\n",
"\n",
"\n",
"**Hints:**\n",
"- We recommend basing your code off your `resize_image()` implementation, and applying the same general approach as before. Iterate over each pixel of an output image `(i, j)`, then fill in a color from a corresponding input pixel `(input_i, input_j)`. In this case, note that the output and input images should be the same size.\n",
"- If you run into an output pixel whose corresponding input coordinates `input_i` and `input_j` that are invalid, you can just ignore that pixel or set it to black.\n",
"- In our expected output above, we're rotating each coordinate around the center of the image, not the origin. (the origin is located at the top left)"
]
},
{
"cell_type": "code",
"source": [
"def rotate_image(input_image, theta):\n",
" \"\"\"Rotate an image by some angle theta.\n",
"\n",
" Args:\n",
" input_image (np.ndarray): RGB image stored as an array, with shape\n",
" `(input_rows, input_cols, 3)`.\n",
" theta (float): Angle to rotate our image by, in radians.\n",
"\n",
" Returns:\n",
" (np.ndarray): Rotated image, with the same shape as the input.\n",
" \"\"\"\n",
" input_rows, input_cols, channels = input_image.shape\n",
" assert channels == 3\n",
"\n",
" # 1. Create an output image with the same shape as the input\n",
" output_image = np.zeros_like(input_image)\n",
"\n",
" ### YOUR CODE HERE\n",
" pass\n",
" ### END YOUR CODE\n",
"\n",
" # 3. Return the output image\n",
" return output_image"
],
"metadata": {
"id": "D762vJNjjvcp"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "AhfyPHMrvF_U"
},
"outputs": [],
"source": [
"# Test that your output matches the expected output\n",
"display(rotate_image(image1, np.pi / 4.0))"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.8"
},
"colab": {
"provenance": []
}
},
"nbformat": 4,
"nbformat_minor": 0
}
================================================
FILE: spring_2026/hw0_release/requirements.txt
================================================
Jinja2>=2.11
MarkupSafe>=2.0
Pillow
PyWavelets
Pygments>=2.7
appnope>=0.1
astroid
bleach>=3.2
cycler>=0.10
decorator>=4.4
entrypoints>=0.3
html5lib>=1.1
isort
jedi>=0.17
jsonschema>=3.2
jupyter
lazy-object-proxy
matplotlib>=3.3
mccabe>=0.6
mistune>=0.8
nbconvert
nbformat>=5.0
networkx>=2.5
notebook
numpy
olefile>=0.46
pandocfilters>=1.4
pexpect>=4.8
pickleshare>=0.7
prompt-toolkit
ptyprocess>=0.6
pylint>=2.6
pyparsing>=2.4
python-dateutil>=2.8
pytz>=2020.1
pyzmq
qtconsole>=4.7
scikit-image
scipy
simplegeneric>=0.8
six>=1.15
terminado
testpath>=0.4
timeout-decorator>=0.4
tornado
wcwidth>=0.2
webencodings>=0.5
widgetsnbextension>=3.5
wrapt>=1.12
================================================
FILE: spring_2026/project1_release/README.md
================================================
# Project 1 Guidelines
CS131, Spring 2026
This project asks you to choose a concept from the selection below and expand on it creatively. You should choose a topic that seems most interesting to you. (This is a great starting point as exploration for a final project as well!)
Please see [Project 1 Guidelines](https://docs.google.com/document/d/1nJSd8TDlYso3aOhG9sLK2AIMHIrlvKPiIfKAZwGP0Bs/edit?tab=t.0#heading=h.wnjpcldre32f) for the most updated project guidelines.
## Main Notebook (50%):
Read through the notebooks to get a sense of which one might interest you the most. Select **one** of the following topics to complete the notebook:
- Option A: Filter and Object Detection (Lectures 2-3)
- Option B: Edge Detection and Hough Transform (Lecture 3)
- Option C: Harris corners, keypoint matching, and SIFT
## Exploration Notebook (50%):
The exploration notebook is a separate, later submission that expands on the concepts from a notebook of your choice.
While this section is open-ended in the topic that you choose to explore, we ask that you complete the following:
- Review of current methods (5%)
  - Once you’ve selected a topic or project idea, explore the literature space. Has there been academic research on this topic? Are there tutorials online, software packages, or libraries?
  - Select **at least** 5 resources (YouTube videos, papers, tutorials, open-source software, libraries, etc.) and provide a short description of each (2-3 sentences).
- Code (35%)
  - We expect you to write code for this project (CS131 is, after all, a CS class 🙂). You may implement algorithms from scratch or expand on algorithms from this notebook, but using other libraries or open-source software in a creative way (e.g., combining methods from different libraries) is also sufficient. Here are some suggestions for starting points:
    - Option A
      - Advanced filter methods: build on the filters from this notebook to emulate filters from social media or photo-editing apps!
      - Advanced image recognition techniques (e.g., extend the object detection problem to be more general, handle more objects, etc.)
    - Option B
      - Edge/lane detection using your own images: analyze the performance of edge detection algorithms in various environments. When does edge detection do better? Worse? How can it be improved?
      - Enhancing the edge detection algorithm
    - Option C
      - Feature detection and keypoint matching using your own images: analyze the performance of Harris corners, descriptors, and matching across different scenes and viewpoints. When do these methods work better? Worse? How can they be improved?
      - Enhancing the keypoint detection, descriptor, or matching pipeline
  - Consult a TA if you’re unsure or confused about what to do!
- Writeup (10%)
  - An explanation of what you did and how it relates to your chosen topic. Please attach any images, figures, etc. (\~200 words)
## Submission Instructions:
_\[Will be updated once autograder is released]_
Please find the relevant Gradescope assignment for the option that you completed and submit that option only. You should submit 2 total Gradescope assignments:
- Option \[X] Notebook Code
- Option \[X] Written Component
================================================
FILE: spring_2026/project1_release/option_A/filters.py
================================================
"""
CS131 - Computer Vision: Foundations and Applications
Project 2 Option A
Author: Donsuk Lee (donlee90@stanford.edu)
Date created: 07/2017
Last modified: 2/5/2024
Python Version: 3.5+
"""
import numpy as np
def conv_nested(image, kernel):
"""A naive implementation of convolution filter.
This is a naive implementation of convolution using 4 nested for-loops.
This function computes convolution of an image with a kernel and outputs
the result that has the same shape as the input image.
Args:
image: numpy array of shape (Hi, Wi).
kernel: numpy array of shape (Hk, Wk). Dimensions will be odd.
Returns:
out: numpy array of shape (Hi, Wi).
"""
Hi, Wi = image.shape
Hk, Wk = kernel.shape
out = np.zeros((Hi, Wi))
### YOUR CODE HERE
pass
### END YOUR CODE
return out
def zero_pad(image, pad_height, pad_width):
""" Zero-pad an image.
Ex: a 1x1 image [[1]] with pad_height = 1, pad_width = 2 becomes:
[[0, 0, 0, 0, 0],
[0, 0, 1, 0, 0],
[0, 0, 0, 0, 0]] of shape (3, 5)
Args:
image: numpy array of shape (H, W).
pad_width: width of the zero padding (left and right padding).
pad_height: height of the zero padding (bottom and top padding).
Returns:
out: numpy array of shape (H+2*pad_height, W+2*pad_width).
"""
H, W = image.shape
out = None
### YOUR CODE HERE
pass
### END YOUR CODE
return out
def conv_fast(image, kernel):
""" An efficient implementation of convolution filter.
This function uses element-wise multiplication and np.sum()
to efficiently compute weighted sum of neighborhood at each
pixel.
Hints:
- Use the zero_pad function you implemented above
- There should be two nested for-loops
- You may find np.flip() and np.sum() useful
Args:
image: numpy array of shape (Hi, Wi).
kernel: numpy array of shape (Hk, Wk). Dimensions will be odd.
Returns:
out: numpy array of shape (Hi, Wi).
"""
Hi, Wi = image.shape
Hk, Wk = kernel.shape
out = np.zeros((Hi, Wi))
### YOUR CODE HERE
pass
### END YOUR CODE
return out
def cross_correlation(f, g):
""" Cross-correlation of image f and template g.
Hint: use the conv_fast function defined above.
Args:
f: numpy array of shape (Hf, Wf).
g: numpy array of shape (Hg, Wg).
Returns:
out: numpy array of shape (Hf, Wf).
"""
out = None
### YOUR CODE HERE
pass
### END YOUR CODE
return out
def zero_mean_cross_correlation(f, g):
""" Zero-mean cross-correlation of image f and template g.
Subtract the mean of g from g so that its mean becomes zero.
Hint: you should look up useful numpy functions online for calculating the mean.
Args:
f: numpy array of shape (Hf, Wf).
g: numpy array of shape (Hg, Wg).
Returns:
out: numpy array of shape (Hf, Wf).
"""
out = None
### YOUR CODE HERE
pass
### END YOUR CODE
return out
def normalized_cross_correlation(f, g):
""" Normalized cross-correlation of image f and template g.
Normalize the subimage of f and the template g at each step
before computing the weighted sum of the two.
Hint: you should look up useful numpy functions online for calculating
the mean and standard deviation.
Args:
f: numpy array of shape (Hf, Wf).
g: numpy array of shape (Hg, Wg).
Returns:
out: numpy array of shape (Hf, Wf).
"""
out = None
### YOUR CODE HERE
pass
### END YOUR CODE
return out
================================================
FILE: spring_2026/project1_release/option_A/option_a.ipynb
================================================
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Project 1 - Option A\n",
"*This notebook is one of 3 options for Project 1. Part 1 includes both coding and written questions. Please hand in this notebook file with all the outputs and your answers to the written questions.*\n",
"\n",
"This notebook covers linear filters, convolution and correlation."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Project 1 Structure\n",
"\n",
"Complete this notebook for the main Project 1 assignment due next week. The exploration component has been moved to a separate notebook named `option_a_exploration.ipynb`, which is due the following week.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Setup\n",
"from __future__ import print_function\n",
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"from time import time\n",
"from skimage import io\n",
"\n",
"\n",
"%matplotlib inline\n",
"plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots\n",
"plt.rcParams['image.interpolation'] = 'nearest'\n",
"plt.rcParams['image.cmap'] = 'gray'\n",
"\n",
"# for auto-reloading extenrnal modules\n",
"%load_ext autoreload\n",
"%autoreload 2"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Part 1: Convolutions\n",
"### 1.1 Commutative Property (5 points)\n",
"Recall that the convolution of an image $f:\\mathbb{R}^2\\rightarrow \\mathbb{R}$ and a kernel $h:\\mathbb{R}^2\\rightarrow\\mathbb{R}$ is defined as follows:\n",
"$$(f*h)[m,n]=\\sum_{i=-\\infty}^\\infty\\sum_{j=-\\infty}^\\infty f[i,j]\\cdot h[m-i,n-j]$$\n",
"\n",
"Or equivalently,\n",
"\\begin{align}\n",
"(f*h)[m,n] &= \\sum_{i=-\\infty}^\\infty\\sum_{j=-\\infty}^\\infty h[i,j]\\cdot f[m-i,n-j]\\\\\n",
"&= (h*f)[m,n]\n",
"\\end{align}\n",
"\n",
"Show that this is true (i.e. prove that the convolution operator is commutative: $f*h = h*f$)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Your Answer:** *Write your solution in this markdown cell. Please write your equations in [LaTex equations](http://jupyter-notebook.readthedocs.io/en/latest/examples/Notebook/Typesetting%20Equations.html).*"
]
},
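As a quick numerical sanity check of the commutative property above (not a substitute for the proof), SciPy's `convolve2d` in `'full'` mode matches the infinite-sum definition with implicit zero padding, so both orderings should agree:

```python
import numpy as np
from scipy.signal import convolve2d

# Two small, arbitrary test signals
f = np.arange(9, dtype=float).reshape(3, 3)
h = np.array([[1.0, -1.0],
              [0.5, 2.0]])

# 'full' mode pads implicitly, matching the infinite-sum definition
assert np.allclose(convolve2d(f, h, mode='full'),
                   convolve2d(h, f, mode='full'))
print("f*h == h*f on this example")
```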
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 1.2 Shift Invariance (5 points)\n",
"Let $f$ be a function $\\mathbb{R}^2\\rightarrow\\mathbb{R}$. Consider a system $f\\xrightarrow{s}g$, where $g=(f*h)$ with some kernel $h:\\mathbb{R}^2\\rightarrow\\mathbb{R}$. Also consider functions $f'[m,n] = f[m-m_0, n-n_0]$ and $g'[m,n] = g[m-m_0, n-n_0]$. \n",
"\n",
"Show that $S$ defined by any kernel $h$ is a shift invariant system by showing that $g' = (f'*h)$."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Your Answer:** *Write your solution in this markdown cell. Please write your equations in [LaTex equations](http://jupyter-notebook.readthedocs.io/en/latest/examples/Notebook/Typesetting%20Equations.html).*"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"### 1.3 Linearity (10 points)\n",
"\n",
"Recall that a system S is considered a linear system if and only if it satisfies the superposition property. In mathematical terms, a (function) S is a linear invariant system iff it satisfies:\n",
"\n",
"$$\n",
"S\\{\\alpha f_1[n,m] + \\beta f_2[n,m]\\} = \\alpha S\\{f_1[n,m]\\} + \\beta S\\{f_2[n,m]\\}\n",
"$$\n",
"\n",
"Let $f_1$ and $f_2$ be functions $\\mathbb{R}^2\\rightarrow\\mathbb{R}$. Consider a system $f\\xrightarrow{s}g$, where $g=(f*h)$ with some kernel $h:\\mathbb{R}^2\\rightarrow\\mathbb{R}$. \n",
"\n",
"Prove that $S$ defined by any kernel $h$ is linear by showing that the superposition property holds."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Your Answer:** *Write your solution in this markdown cell. Please write your equations in [LaTex equations](http://jupyter-notebook.readthedocs.io/en/latest/examples/Notebook/Typesetting%20Equations.html).*"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 1.4 Implementation (30 points)\n",
"\n",
"In this section, you will implement two versions of convolution:\n",
"- `conv_nested`\n",
"- `conv_fast`\n",
"\n",
"First, run the code cell below to load the image to work with."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Open image as grayscale\n",
"img = io.imread('dog.jpg', as_gray=True)\n",
"\n",
"# Show image\n",
"plt.imshow(img)\n",
"plt.axis('off')\n",
"plt.title(\"Isn't he cute?\")\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now, implement the function **`conv_nested`** in **`filters.py`**. This is a naive implementation of convolution which uses 4 nested for-loops. It takes an image $f$ and a kernel $h$ as inputs and outputs the convolved image $(f*h)$ that has the same shape as the input image. This implementation should take a few seconds to run.\n",
"\n",
"*- Hint: It may be easier to implement $(h*f)$*"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We'll first test your `conv_nested` function on a simple input."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from filters import conv_nested\n",
"\n",
"# Simple convolution kernel.\n",
"kernel = np.array(\n",
"[\n",
" [1,0,1],\n",
" [0,0,0],\n",
" [1,0,0]\n",
"])\n",
"\n",
"# Create a test image: a white square in the middle\n",
"test_img = np.zeros((9, 9))\n",
"test_img[3:6, 3:6] = 1\n",
"\n",
"# Run your conv_nested function on the test image\n",
"test_output = conv_nested(test_img, kernel)\n",
"\n",
"# Build the expected output\n",
"expected_output = np.zeros((9, 9))\n",
"expected_output[2:7, 2:7] = 1\n",
"expected_output[5:, 5:] = 0\n",
"expected_output[4, 2:5] = 2\n",
"expected_output[2:5, 4] = 2\n",
"expected_output[4, 4] = 3\n",
"\n",
"# Plot the test image\n",
"plt.subplot(1,3,1)\n",
"plt.imshow(test_img)\n",
"plt.title('Test image')\n",
"plt.axis('off')\n",
"\n",
"# Plot your convolved image\n",
"plt.subplot(1,3,2)\n",
"plt.imshow(test_output)\n",
"plt.title('Convolution')\n",
"plt.axis('off')\n",
"\n",
"# Plot the exepected output\n",
"plt.subplot(1,3,3)\n",
"plt.imshow(expected_output)\n",
"plt.title('Exepected output')\n",
"plt.axis('off')\n",
"plt.show()\n",
"\n",
"# Test if the output matches expected output\n",
"assert np.max(test_output - expected_output) < 1e-10, \"Your solution is not correct.\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now let's test your `conv_nested` function on a real image."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from filters import conv_nested\n",
"\n",
"# Simple convolution kernel.\n",
"# Feel free to change the kernel to see different outputs.\n",
"kernel = np.array(\n",
"[\n",
" [1,0,-1],\n",
" [2,0,-2],\n",
" [1,0,-1]\n",
"])\n",
"\n",
"out = conv_nested(img, kernel)\n",
"\n",
"# Plot original image\n",
"plt.subplot(2,2,1)\n",
"plt.imshow(img)\n",
"plt.title('Original')\n",
"plt.axis('off')\n",
"\n",
"# Plot your convolved image\n",
"plt.subplot(2,2,3)\n",
"plt.imshow(out)\n",
"plt.title('Convolution')\n",
"plt.axis('off')\n",
"\n",
"# Plot what you should get\n",
"solution_img = io.imread('convolved_dog.png', as_gray=True)\n",
"plt.subplot(2,2,4)\n",
"plt.imshow(solution_img)\n",
"plt.title('What you should get')\n",
"plt.axis('off')\n",
"\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let us implement a more efficient version of convolution using array operations in numpy. As shown in the lecture, a convolution can be considered as a sliding window that computes sum of the pixel values weighted by the flipped kernel. The faster version will i) zero-pad an image, ii) flip the kernel horizontally and vertically, and iii) compute weighted sum of the neighborhood at each pixel.\n",
"\n",
"First, implement the function **`zero_pad`** in **`filters.py`**.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from filters import zero_pad\n",
"\n",
"pad_width = 20 # width of the padding on the left and right\n",
"pad_height = 40 # height of the padding on the top and bottom\n",
"\n",
"padded_img = zero_pad(img, pad_height, pad_width)\n",
"\n",
"# Plot your padded dog\n",
"plt.subplot(1,2,1)\n",
"plt.imshow(padded_img)\n",
"plt.title('Padded dog')\n",
"plt.axis('off')\n",
"\n",
"# Plot what you should get\n",
"solution_img = io.imread('padded_dog.jpg', as_gray=True)\n",
"plt.subplot(1,2,2)\n",
"plt.imshow(solution_img)\n",
"plt.title('What you should get')\n",
"plt.axis('off')\n",
"\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next, complete the function **`conv_fast`** in **`filters.py`** using `zero_pad`. Run the code below to compare the outputs by the two implementations. `conv_fast` should run noticeably faster than `conv_nested`. \n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from filters import conv_fast\n",
"\n",
"t0 = time()\n",
"out_fast = conv_fast(img, kernel)\n",
"t1 = time()\n",
"out_nested = conv_nested(img, kernel)\n",
"t2 = time()\n",
"\n",
"# Compare the running time of the two implementations\n",
"print(\"conv_nested: took %f seconds.\" % (t2 - t1))\n",
"print(\"conv_fast: took %f seconds.\" % (t1 - t0))\n",
"\n",
"# Plot conv_nested output\n",
"plt.subplot(1,2,1)\n",
"plt.imshow(out_nested)\n",
"plt.title('conv_nested')\n",
"plt.axis('off')\n",
"\n",
"# Plot conv_fast output\n",
"plt.subplot(1,2,2)\n",
"plt.imshow(out_fast)\n",
"plt.title('conv_fast')\n",
"plt.axis('off')\n",
"\n",
"# Make sure that the two outputs are the same\n",
"if not (np.max(out_fast - out_nested) < 1e-10):\n",
" print(\"Different outputs! Check your implementation.\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"## Part 2: Cross-correlation\n",
"\n",
"Cross-correlation of an image $f$ with a template $g$ is defined as follows:\n",
"$$(g ** f)[m,n]=\\sum_{i=-\\infty}^\\infty\\sum_{j=-\\infty}^\\infty g[i,j]\\cdot f[m + i,n + j]$$"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 2.1 Template Matching with Cross-correlation (12 points)\n",
"Suppose that you are a clerk at a grocery store. One of your responsibilites is to check the shelves periodically and stock them up whenever there are sold-out items. You got tired of this laborious task and decided to build a computer vision system that keeps track of the items on the shelf.\n",
"\n",
"Luckily, you have learned in CS131 that cross-correlation can be used for template matching: a template $g$ is multiplied with regions of a larger image $f$ to measure how similar each region is to the template.\n",
"\n",
"The template of a product (`template.jpg`) and the image of shelf (`shelf.jpg`) is provided. We will use cross-correlation to find the product in the shelf.\n",
"\n",
"Implement **`cross_correlation`** function in **`filters.py`** and run the code below.\n",
"\n",
"*- Hint: you may use the `conv_fast` function you implemented in the previous question.*"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from filters import cross_correlation\n",
"\n",
"# Load template and image in grayscale\n",
"img = io.imread('shelf.jpg')\n",
"img_gray = io.imread('shelf.jpg', as_gray=True)\n",
"temp = io.imread('template.jpg')\n",
"temp_gray = io.imread('template.jpg', as_gray=True)\n",
"\n",
"# Perform cross-correlation between the image and the template\n",
"out = cross_correlation(img_gray, temp_gray)\n",
"\n",
"# Find the location with maximum similarity\n",
"y,x = (np.unravel_index(out.argmax(), out.shape))\n",
"\n",
"# Display product template\n",
"plt.figure(figsize=(25,20))\n",
"plt.subplot(3, 1, 1)\n",
"plt.imshow(temp)\n",
"plt.title('Template')\n",
"plt.axis('off')\n",
"\n",
"# Display cross-correlation output\n",
"plt.subplot(3, 1, 2)\n",
"plt.imshow(out)\n",
"plt.title('Cross-correlation (white means more correlated)')\n",
"plt.axis('off')\n",
"\n",
"# Display image\n",
"plt.subplot(3, 1, 3)\n",
"plt.imshow(img)\n",
"plt.title('Result (blue marker on the detected location)')\n",
"plt.axis('off')\n",
"\n",
"# Draw marker at detected location\n",
"plt.plot(x, y, 'bx', ms=40, mew=10)\n",
"plt.show()"
]
},
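The cell above converts the flat `argmax` index back to 2D coordinates with `np.unravel_index`. A minimal sketch of that conversion, using a made-up score array in place of the real cross-correlation output:

```python
import numpy as np

# A tiny similarity map standing in for the cross-correlation output
scores = np.array([[0.1, 0.9],
                   [0.3, 0.5]])

flat_index = scores.argmax()                       # index into the flattened array
y, x = np.unravel_index(flat_index, scores.shape)  # back to (row, col)
print(y, x)  # -> 0 1
```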
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Interpretation\n",
"How does the output of cross-correlation filter look? Explain what problems there might be with using a raw template as a filter."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Your Answer:** *Write your solution in this markdown cell.*"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"### 2.2 Zero-mean cross-correlation (6 points)\n",
"A solution to this problem is to subtract the mean value of the template so that it has zero mean.\n",
"\n",
"Implement **`zero_mean_cross_correlation`** function in **`filters.py`** and run the code below.\n",
"\n",
"**If your implementation is correct, you should see the blue cross centered over the correct cereal box.**"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from filters import zero_mean_cross_correlation\n",
"\n",
"# Perform cross-correlation between the image and the template\n",
"out = zero_mean_cross_correlation(img_gray, temp_gray)\n",
"\n",
"# Find the location with maximum similarity\n",
"y,x = np.unravel_index(out.argmax(), out.shape)\n",
"\n",
"# Display product template\n",
"plt.figure(figsize=(30,20))\n",
"plt.subplot(3, 1, 1)\n",
"plt.imshow(temp)\n",
"plt.title('Template')\n",
"plt.axis('off')\n",
"\n",
"# Display cross-correlation output\n",
"plt.subplot(3, 1, 2)\n",
"plt.imshow(out)\n",
"plt.title('Cross-correlation (white means more correlated)')\n",
"plt.axis('off')\n",
"\n",
"# Display image\n",
"plt.subplot(3, 1, 3)\n",
"plt.imshow(img)\n",
"plt.title('Result (blue marker on the detected location)')\n",
"plt.axis('off')\n",
"\n",
"# Draw marker at detected location\n",
"plt.plot(x, y, 'bx', ms=40, mew=10)\n",
"plt.show()\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can also determine whether the product is present with appropriate scaling and thresholding."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def check_product_on_shelf(shelf, product):\n",
" out = zero_mean_cross_correlation(shelf, product)\n",
" \n",
" # Scale output by the size of the template\n",
" out = out / float(product.shape[0]*product.shape[1])\n",
" \n",
" # Threshold output (this is arbitrary, you would need to tune the threshold for a real application)\n",
" out = out > 0.025\n",
" \n",
" if np.sum(out) > 0:\n",
" print('The product is on the shelf')\n",
" else:\n",
" print('The product is not on the shelf')\n",
"\n",
"# Load image of the shelf without the product\n",
"img2 = io.imread('shelf_soldout.jpg')\n",
"img2_gray = io.imread('shelf_soldout.jpg', as_gray=True)\n",
"\n",
"plt.imshow(img)\n",
"plt.axis('off')\n",
"plt.show()\n",
"check_product_on_shelf(img_gray, temp_gray)\n",
"\n",
"plt.imshow(img2)\n",
"plt.axis('off')\n",
"plt.show()\n",
"check_product_on_shelf(img2_gray, temp_gray)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"### 2.3 Normalized Cross-correlation (12 points)\n",
"One day the light near the shelf goes out and the product tracker starts to malfunction. The `zero_mean_cross_correlation` is not robust to change in lighting condition. The code below demonstrates this."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from filters import normalized_cross_correlation\n",
"\n",
"# Load image\n",
"img = io.imread('shelf_dark.jpg')\n",
"img_gray = io.imread('shelf_dark.jpg', as_gray=True)\n",
"\n",
"# Perform cross-correlation between the image and the template\n",
"out = zero_mean_cross_correlation(img_gray, temp_gray)\n",
"\n",
"# Find the location with maximum similarity\n",
"y, x = np.unravel_index(out.argmax(), out.shape)\n",
"\n",
"# Display image\n",
"plt.imshow(img)\n",
"plt.title('Result (red marker on the detected location)')\n",
"plt.axis('off')\n",
"\n",
"# Draw marker at detcted location\n",
"plt.plot(x, y, 'rx', ms=25, mew=5)\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"A solution is to normalize the pixels of the image and template at every step before comparing them. This is called **normalized cross-correlation**.\n",
"\n",
"The mathematical definition for normalized cross-correlation of $f$ and template $g$ is:\n",
"$$(g \\star f)[m,n]=\\sum_{i,j} \\frac{g[i, j]-\\overline{g}}{\\sigma_g} \\cdot \\frac{f[m + i, n + j]-\\overline{f_{m,n}}}{\\sigma_{f_{m,n}}}$$\n",
"\n",
"where:\n",
"- $f_{m,n}$ is the patch image at position $(m,n)$\n",
"- $\\overline{f_{m,n}}$ is the mean of the patch image $f_{m,n}$\n",
"- $\\sigma_{f_{m,n}}$ is the standard deviation of the patch image $f_{m,n}$ \n",
"- $\\overline{g}$ is the mean of the template $g$\n",
"- $\\sigma_g$ is the standard deviation of the template $g$\n",
"\n",
"Implement **`normalized_cross_correlation`** function in **`filters.py`** and run the code below."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from filters import normalized_cross_correlation\n",
"\n",
"# Perform normalized cross-correlation between the image and the template\n",
"out = normalized_cross_correlation(img_gray, temp_gray)\n",
"\n",
"# Find the location with maximum similarity\n",
"y, x = np.unravel_index(out.argmax(), out.shape)\n",
"\n",
"# Display image\n",
"plt.imshow(img)\n",
"plt.title('Result (red marker on the detected location)')\n",
"plt.axis('off')\n",
"\n",
"# Draw marker at detcted location\n",
"plt.plot(x, y, 'rx', ms=25, mew=5)\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Part 3: Separable Filters"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 3.1 Theory (10 points)\n",
"Consider an $M_1\\times{N_1}$ image $I$ and an $M_2\\times{N_2}$ filter $F$. A filter $F$ is **separable** if it can be written as a product of two 1D filters: $F=F_1F_2$.\n",
"\n",
"For example,\n",
"$$F=\n",
"\\begin{bmatrix}\n",
"1 & -1 \\\\\n",
"1 & -1\n",
"\\end{bmatrix}\n",
"$$\n",
"can be written as a matrix product of\n",
"$$F_1=\n",
"\\begin{bmatrix}\n",
"1 \\\\\n",
"1\n",
"\\end{bmatrix},\n",
"F_2=\n",
"\\begin{bmatrix}\n",
"1 & -1\n",
"\\end{bmatrix}\n",
"$$\n",
"Therefore $F$ is a separable filter.\n",
"\n",
"Prove that for any separable filter $F=F_1F_2$,\n",
"$$I*F=(I*F_1)*F_2$$\n",
"where $*$ is the convolution operation."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Your Answer:** *Write your solution in this markdown cell. Please write your equations in [LaTex equations](http://jupyter-notebook.readthedocs.io/en/latest/examples/Notebook/Typesetting%20Equations.html).*"
]
},
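As a quick check of the worked example above (separate from the proof itself), the matrix product of $F_1$ and $F_2$ does reproduce $F$:

```python
import numpy as np

F1 = np.array([[1],
               [1]])          # column filter, shape (2, 1)
F2 = np.array([[1, -1]])      # row filter, shape (1, 2)
F = np.array([[1, -1],
              [1, -1]])

# The 2D filter factors as a matrix (outer) product of the two 1D filters
assert np.array_equal(F1 @ F2, F)
print("F is separable")
```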
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 3.2 Complexity comparison (10 points)\n",
"Consider an $M_1\\times{N_1}$ image $I$ and an $M_2\\times{N_2}$ filter $F$ that is separable (i.e. $F=F_1F_2$).\n",
"\n",
"(i) How many multiplication operations do you need to do a direct 2D convolution (i.e. $I*F$)?<br>\n",
"(ii) How many multiplication operations do you need to do 1D convolutions on rows and columns (i.e. $(I*F_1)*F_2$)?<br>\n",
"(iii) Use Big-O notation, written with respet to the dimensions $M_1$, $N_1$, $M_2$, and $N_2$, to argue which one is more efficient in general: direct 2D convolution or two successive 1D convolutions?"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Your Answer:** *Write your solution in this markdown cell. Please write your equations in [LaTex equations](http://jupyter-notebook.readthedocs.io/en/latest/examples/Notebook/Typesetting%20Equations.html).*"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now, we will empirically compare the running time of a separable 2D convolution and its equivalent two 1D convolutions. The Gaussian kernel, widely used for blurring images, is one example of a separable filter. Run the code below to see its effect."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Load image\n",
"img = io.imread('dog.jpg', as_gray=True)\n",
"\n",
"# 5x5 Gaussian blur\n",
"kernel = np.array([\n",
" [1,4,6,4,1],\n",
" [4,16,24,16,4],\n",
" [6,24,36,24,6],\n",
" [4,16,24,16,4],\n",
" [1,4,6,4,1]\n",
"])\n",
"\n",
"t0 = time()\n",
"out = conv_nested(img, kernel)\n",
"t1 = time()\n",
"t_normal = t1 - t0\n",
"\n",
"# Plot original image\n",
"plt.subplot(1,2,1)\n",
"plt.imshow(img)\n",
"plt.title('Original')\n",
"plt.axis('off')\n",
"\n",
"# Plot convolved image\n",
"plt.subplot(1,2,2)\n",
"plt.imshow(out)\n",
"plt.title('Blurred')\n",
"plt.axis('off')\n",
"\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In the below code cell, define the two 1D arrays (`k1` and `k2`) whose product is equal to the Gaussian kernel."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# The kernel can be written as outer product of two 1D filters\n",
"k1 = None # shape (5, 1)\n",
"k2 = None # shape (1, 5)\n",
"\n",
"### YOUR CODE HERE\n",
"pass\n",
"### END YOUR CODE\n",
"\n",
"# Check if kernel is product of k1 and k2\n",
"if not np.all(k1 * k2 == kernel):\n",
" print('k1 * k2 is not equal to kernel')\n",
" \n",
"assert k1.shape == (5, 1), \"k1 should have shape (5, 1)\"\n",
"assert k2.shape == (1, 5), \"k2 should have shape (1, 5)\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We now apply the two versions of convolution to the same image, and compare their running time. Note that the outputs of the two convolutions must be the same."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Perform two convolutions using k1 and k2\n",
"t0 = time()\n",
"out_separable = conv_nested(img, k1)\n",
"out_separable = conv_nested(out_separable, k2)\n",
"t1 = time()\n",
"t_separable = t1 - t0\n",
"\n",
"# Plot normal convolution image\n",
"plt.subplot(1,2,1)\n",
"plt.imshow(out)\n",
"plt.title('Normal convolution')\n",
"plt.axis('off')\n",
"\n",
"# Plot separable convolution image\n",
"plt.subplot(1,2,2)\n",
"plt.imshow(out_separable)\n",
"plt.title('Separable convolution')\n",
"plt.axis('off')\n",
"\n",
"plt.show()\n",
"\n",
"print(\"Normal convolution: took %f seconds.\" % (t_normal))\n",
"print(\"Separable convolution: took %f seconds.\" % (t_separable))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Check if the two outputs are equal\n",
"assert np.max(out_separable - out) < 1e-8"
]
},
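{
"cell_type": "markdown",
"metadata": {},
"source": [
"If SciPy happens to be installed in your environment (it is not required by this notebook), `scipy.signal.convolve2d` offers an independent cross-check of the blurred output:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Optional cross-check against SciPy (assumes scipy is installed;\n",
"# the Gaussian kernel is symmetric, so correlation vs. convolution\n",
"# conventions do not matter here).\n",
"try:\n",
"    from scipy.signal import convolve2d\n",
"    ref = convolve2d(img, kernel, mode='same', boundary='fill')\n",
"    print('Matches scipy.signal.convolve2d:', np.allclose(out, ref))\n",
"except ImportError:\n",
"    print('SciPy not installed; skipping cross-check.')"
]
},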
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Separate Exploration Notebook\n",
"\n",
"The exploration component for this option now lives in `option_a_exploration.ipynb`. Complete this main notebook first, then complete the separate exploration notebook for the later due date.\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.5"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
================================================
FILE: spring_2026/project1_release/option_A/option_a_exploration.ipynb
================================================
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Option A Exploration\n",
"\n",
"This notebook contains the exploration component for Option A, due the week after the main notebook. Choose something related to the topics covered in the main notebook and build something creative or interesting. \n",
"\n",
"_For this notebook some helpful staring points for the extension include:_\n",
"* **Advanced filter methods:** Build on the filters from this notebook to emulate filters from social media or photo-editing apps!\n",
"* **Advanced image recognition techniques:** Extend the object detection problem to be more general, handle more objects, etx\n",
"\n",
"For more detailed instructions, please see the [Project 1 Guidelines](https://docs.google.com/document/d/1_yLzSePaVH2OrQgzZsu-1sZ4ez1Da9_tFzZFM3LCarQ/edit?usp=sharing)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Review of current methods (10 points) \n",
"Once you\u2019ve selected a topic or project idea, explore the literature space. Has there been academic research on this topic? Are there tutorials online, software packages, or libraries? \n",
"\n",
"Select at least 5 resources (youtube videos, papers, tutorials, opensource software, libraries, etc) and provide a short description (2-3 sentences) below: \n",
"\n",
"* Source 1: \n",
"* Source 2: \n",
"* Source 3: \n",
"* Source 4: \n",
"* Source 5: "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Code (70 points) \n",
"We expect you to write code for this project (CS131 is, after all, a CS class \ud83d\ude42). You may implement algorithms from scratch or expand on algorithms from this notebook if you would like, but using other libraries or other open-source software in a creative way is also sufficient. \n",
"\n",
"You not required to develop your code in this notebook! Feel free to create your own jupyter notebook for the project or write code in your environment of choice! (Jupyter notebook or google colab are good starting options)!\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Please add your code here\n",
"# If you do not write your code in this notebook, please attach a link to any code that you wrote! "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Writeup (20 points)\n",
"\n",
"An explanation of what you did, and how it relates to the topic of choice. (~200 words) Please attach any images, figures, etc.\n",
"\n",
"_You may also add a link to your writeup if that is easier!_"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.5"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
================================================
FILE: spring_2026/project1_release/option_B/edge.py
================================================
"""
CS131 - Computer Vision: Foundations and Applications
Project 1 Option B
Author: Donsuk Lee (donlee90@stanford.edu)
Date created: 07/2017
Last modified: 10/25/2022
Python Version: 3.5+
"""
import numpy as np
def conv(image, kernel):
""" An implementation of convolution filter.
This function uses element-wise multiplication and np.sum()
to efficiently compute weighted sum of neighborhood at each
pixel.
Args:
image: numpy array of shape (Hi, Wi).
kernel: numpy array of shape (Hk, Wk).
Returns:
out: numpy array of shape (Hi, Wi).
"""
Hi, Wi = image.shape
Hk, Wk = kernel.shape
out = np.zeros((Hi, Wi))
# For this assignment, we will use edge values to pad the images.
# Zero padding will make derivatives at the image boundary very big,
# whereas we want to ignore the edges at the boundary.
pad_width0 = Hk // 2
pad_width1 = Wk // 2
pad_width = ((pad_width0,pad_width0),(pad_width1,pad_width1))
padded = np.pad(image, pad_width, mode='edge')
### YOUR CODE HERE
pass
### END YOUR CODE
return out
def gaussian_kernel(size, sigma):
""" Implementation of Gaussian Kernel.
This function follows the Gaussian kernel formula
    kernel[i, j] = exp(-((i - k)^2 + (j - k)^2) / (2 * sigma^2)) / (2 * pi * sigma^2),
where k = size // 2, and creates a kernel matrix.
Hints:
- Use np.pi and np.exp to compute pi and exp.
Args:
size: int of the size of output matrix.
sigma: float of sigma to calculate kernel.
Returns:
kernel: numpy array of shape (size, size).
"""
kernel = np.zeros((size, size))
### YOUR CODE HERE
pass
### END YOUR CODE
return kernel
def partial_x(img):
""" Computes partial x-derivative of input img.
Hints:
- You may use the conv function defined in this file.
Args:
img: numpy array of shape (H, W).
Returns:
out: x-derivative image.
"""
out = None
### YOUR CODE HERE
pass
### END YOUR CODE
return out
def partial_y(img):
""" Computes partial y-derivative of input img.
Hints:
- You may use the conv function defined in this file.
Args:
img: numpy array of shape (H, W).
Returns:
out: y-derivative image.
"""
out = None
### YOUR CODE HERE
pass
### END YOUR CODE
return out
def gradient(img):
""" Returns gradient magnitude and direction of input img.
Args:
img: Grayscale image. Numpy array of shape (H, W).
Returns:
G: Magnitude of gradient at each pixel in img.
Numpy array of shape (H, W).
theta: Direction (in degrees, 0 <= theta < 360) of gradient
at each pixel in img. Numpy array of shape (H, W).
Hints:
- Use np.sqrt and np.arctan2 to calculate square root and arctan
"""
G = np.zeros(img.shape)
theta = np.zeros(img.shape)
### YOUR CODE HERE
pass
### END YOUR CODE
return G, theta
def non_maximum_suppression(G, theta):
""" Performs non-maximum suppression.
This function performs non-maximum suppression along the direction
of gradient (theta) on the gradient magnitude image (G).
Args:
G: gradient magnitude image with shape of (H, W).
theta: direction of gradients with shape of (H, W).
Returns:
out: non-maxima suppressed image.
"""
H, W = G.shape
out = np.zeros((H, W))
# Round the gradient direction to the nearest 45 degrees
theta = np.floor((theta + 22.5) / 45) * 45
theta = (theta % 360.0).astype(np.int32)
#print(G)
### BEGIN YOUR CODE
pass
### END YOUR CODE
return out
def double_thresholding(img, high, low):
"""
Args:
img: numpy array of shape (H, W) representing NMS edge response.
high: high threshold(float) for strong edges.
low: low threshold(float) for weak edges.
Returns:
strong_edges: Boolean array representing strong edges.
Strong edges are the pixels with values greater than
the high threshold.
weak_edges: Boolean array representing weak edges.
Weak edges are the pixels with values less than or equal to the
high threshold and greater than the low threshold.
"""
strong_edges = np.zeros(img.shape, dtype=bool)
weak_edges = np.zeros(img.shape, dtype=bool)
### YOUR CODE HERE
pass
### END YOUR CODE
return strong_edges, weak_edges
def get_neighbors(y, x, H, W):
""" Return indices of valid neighbors of (y, x).
Return indices of all the valid neighbors of (y, x) in an array of
shape (H, W). An index (i, j) of a valid neighbor should satisfy
the following:
1. i >= 0 and i < H
2. j >= 0 and j < W
3. (i, j) != (y, x)
Args:
y, x: location of the pixel.
H, W: size of the image.
Returns:
neighbors: list of indices of neighboring pixels [(i, j)].
"""
neighbors = []
for i in (y-1, y, y+1):
for j in (x-1, x, x+1):
if i >= 0 and i < H and j >= 0 and j < W:
if (i == y and j == x):
continue
neighbors.append((i, j))
return neighbors
def link_edges(strong_edges, weak_edges):
""" Find weak edges connected to strong edges and link them.
Iterate over each pixel in strong_edges and perform breadth first
search across the connected pixels in weak_edges to link them.
Here we consider a pixel (a, b) is connected to a pixel (c, d)
if (a, b) is one of the eight neighboring pixels of (c, d).
Args:
strong_edges: binary image of shape (H, W).
weak_edges: binary image of shape (H, W).
Returns:
edges: numpy boolean array of shape(H, W).
"""
H, W = strong_edges.shape
indices = np.stack(np.nonzero(strong_edges)).T
edges = np.zeros((H, W), dtype=bool)
# Make new instances of arguments to leave the original
# references intact
weak_edges = np.copy(weak_edges)
edges = np.copy(strong_edges)
### YOUR CODE HERE
pass
### END YOUR CODE
return edges
def canny(img, kernel_size=5, sigma=1.4, high=20, low=15):
""" Implement canny edge detector by calling functions above.
Args:
img: binary image of shape (H, W).
kernel_size: int of size for kernel matrix.
sigma: float for calculating kernel.
high: high threshold for strong edges.
low: low threshold for weak edges.
Returns:
edge: numpy array of shape(H, W).
"""
### YOUR CODE HERE
pass
### END YOUR CODE
return edge
def hough_transform(img):
""" Transform points in the input image into Hough space.
Use the parameterization:
rho = x * cos(theta) + y * sin(theta)
to transform a point (x,y) to a sine-like function in Hough space.
Args:
img: binary image of shape (H, W).
Returns:
accumulator: numpy array of shape (m, n).
rhos: numpy array of shape (m, ).
thetas: numpy array of shape (n, ).
"""
# Set rho and theta ranges
H, W = img.shape
diag_len = int(np.ceil(np.sqrt(W * W + H * H)))
rhos = np.linspace(-diag_len, diag_len, diag_len * 2 + 1)
thetas = np.deg2rad(np.arange(-90.0, 90.0))
# Cache some reusable values
cos_t = np.cos(thetas)
sin_t = np.sin(thetas)
num_thetas = len(thetas)
# Initialize accumulator in the Hough space
accumulator = np.zeros((2 * diag_len + 1, num_thetas), dtype=np.uint64)
ys, xs = np.nonzero(img)
# Transform each point (x, y) in image
# Find rho corresponding to values in thetas
# and increment the accumulator in the corresponding coordinate.
### YOUR CODE HERE
pass
### END YOUR CODE
return accumulator, rhos, thetas
================================================
FILE: spring_2026/project1_release/option_B/images/gt/105.pgm.gtf.pgm
================================================
P5
640 473
255
================================================
FILE: spring_2026/project1_release/option_B/images/gt/106.pgm.gtf.pgm
================================================
P5
577 435
255
"chars": 64525,
"preview": "P5\n512 468\n255\n{vvvvwwvwzz{xwyx{{{xwxxxy{wyztwyw{}yy}}~~}~|ux{w{nu}~}}~~~~~~~}~~}}~~|~}}}{|z{{}{{w{{z{{zzzv{zz{vv"
},
{
"path": "spring_2026/project1_release/option_B/images/objects/218.pgm",
"chars": 178834,
"preview": "P5\n552 468\n255\nkmjkkkmeemjkL6>894<9;;8899899999<989;<9988;;<888<99<<<999<999<88;8>;;?E=;7O[Z\\Z\\^XG[QBN`WNV[_b_`g^LT`[`eb"
},
{
"path": "spring_2026/project1_release/option_B/images/objects/220.pgm",
"chars": 57813,
"preview": "P5\n512 512\n255\nzvqoqomoqqskonopnmnhjusskotuonnnonnnnpopkoptooknqonpoknonnotzztooooustqooqlospoouurpoqstxwutnpinjjoihgeeg"
},
{
"path": "spring_2026/project1_release/option_B/images/objects/221.pgm",
"chars": 111371,
"preview": "P5\n572 512\n255\n~w¿Սwùƾيxw¼א{ڋڋ|ތy¾wƿwſᎎwǿz¼܌}ÿ⌌{ÿ⊌zጎz勎y¼猎~ÿ后}王y⎋wڎz¾ݐvſ匏y鉎{¼⋋|¾ᎏx㌍xž匍{ľ㎎{㎑}¿}㈌|䎎zǿދ{ļߋ{ދ~Ľᐊ|½ܐyߓx攋"
},
{
"path": "spring_2026/project1_release/option_B/images/objects/223.pgm",
"chars": 94687,
"preview": "P5\n512 456\n255\nǶžÿýĿ¿ȿø¿Ľ¿±ɿž¾Ľ½ĽĻǼ½ƾľĿǿĴÿ¿DZ˿Ŀ¹Ŀƴ¸¿¿¿ýŸǿĿýĸĺÿĽɿ¿ղʽýÿմ¶ľ¿¾¿½½ڵ¹¼ÿ½ĽĽý䱲ƽ¾½þĿü宲ù¾¿º¿ľ崴¿¼굵ÿÿ¾ľĿſȾȾþ½ƾ¾¿þĽ¿"
},
{
"path": "spring_2026/project1_release/option_B/images/objects/36.pgm",
"chars": 193810,
"preview": "P5\n577 419\n255\n36693200,.2/44165/27:3/33.4:7420,+2449>/527><58;9578559;7<<>;9@AA=;=:FRLLPTRPPPQTPMJRTPOOMNPNNONNMLIQLOQN"
},
{
"path": "spring_2026/project1_release/option_B/images/objects/43.pgm",
"chars": 150505,
"preview": "P5\n620 462\n255\n|}~{~{}}~zvx}xn[F2011--0+*&01),',0,)%'#'-*%.---,'-00+##(.)-3,,')*%(.+)+2.&-#05'**+..%)0.'+((*'+--$($*&"
},
{
"path": "spring_2026/project1_release/option_B/images/objects/47.pgm",
"chars": 100618,
"preview": "P5\n# CREATOR: XV Version 3.10 Rev: 12/16/94\n367 417\n255\nwswtquusuxxxxwx{{yxxww{xvyvvx{wsw{ttssxxx{{usqqwxutwxvnqw{{swx{"
},
{
"path": "spring_2026/project1_release/option_B/images/objects/48.pgm",
"chars": 173041,
"preview": "P5\n539 433\n255\n;DCFIQ_lrz||{uqwݼm]bzא~תp¿¼tojhizvyoswŲz`J<8@Sbe[W\\\\\\XZ\\YY\\Z\\_Z\\YUZZV\\ZW^^V\\Y^^^[Y]]bb]]\\[Ybfghfdedfffiif"
},
{
"path": "spring_2026/project1_release/option_B/images/objects/50.pgm",
"chars": 170273,
"preview": "P5\n620 425\n255\nƽ¶þüÿǻ|}wvzvokoolkikhdaghjcȰƼӿ{wrpsnut{쪒gDz|ppptqpqptrqqlppoorqkjlllihkijjkkmlkiklkiiikloiklkkhnlkhihhm"
},
{
"path": "spring_2026/project1_release/option_B/images/objects/56.pgm",
"chars": 124777,
"preview": "P5\n538 416\n255\n@?BVVVUPNU\\T_VHKRVhuRSMf]L^MHEAKNTLC?>>RO^uHVԔziԄd`kiqm}znr}ut|wb\\Woyhyw}my~tlv}sﳠnzvys~ό}{ܜHFHRYebT"
},
{
"path": "spring_2026/project1_release/option_B/images/objects/61.pgm",
"chars": 212064,
"preview": "P5\n615 451\n255\n!!\u001f\u001f\u001f\u001f\u001e\u001f\"\u001b\u001e\u001c\u001c\u001f\u001f\u001e 62442276642124121154441//.............../........................../........../........."
},
{
"path": "spring_2026/project1_release/option_B/images/objects/62.pgm",
"chars": 212374,
"preview": "P5\n619 461\n255\nƾýǯtmj`_[\\JD?<9671(00.1+*.*..'&,.+++-+/-/--.//..3.--.0./././-.-**),-*-1Fgؿ϶r`hھeK??>:/)*0.-/0.36332-30//2"
},
{
"path": "spring_2026/project1_release/option_B/option_b.ipynb",
"chars": 26767,
"preview": "{\n \"cells\": [\n {\n \"cell_type\": \"markdown\",\n \"metadata\": {},\n \"source\": [\n \"# Project 1 - Option B\\n\",\n \"*Th"
},
{
"path": "spring_2026/project1_release/option_B/option_b_exploration.ipynb",
"chars": 3422,
"preview": "{\n \"cells\": [\n {\n \"cell_type\": \"markdown\",\n \"metadata\": {},\n \"source\": [\n \"# Option B Exploration\\n\",\n \"\\n\""
},
{
"path": "spring_2026/project1_release/option_C/option_c.ipynb",
"chars": 29955,
"preview": "{\n \"cells\": [\n {\n \"cell_type\": \"markdown\",\n \"id\": \"829eeee5-b288-4240-b668-187f7be574e8\",\n \"metadata\": {},\n \"so"
},
{
"path": "spring_2026/project1_release/option_C/option_c_exploration.ipynb",
"chars": 1425,
"preview": "{\n \"cells\": [\n {\n \"cell_type\": \"markdown\",\n \"metadata\": {},\n \"source\": [\n \"# Option C Exploration\\n\",\n \"\\n\""
}
]
// ... and 35 more files (download for full content)
About this extraction
This page contains the full source code of the StanfordVL/CS131_release GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction includes 100 files (6.2 MB), approximately 1.6M tokens, and a symbol index with 17 extracted functions, classes, methods, constants, and types. Use this with OpenClaw, Claude, ChatGPT, Cursor, Windsurf, or any other AI tool that accepts text input. You can copy the full output to your clipboard or download it as a .txt file.
Extracted by GitExtract — free GitHub repo to text converter for AI. Built by Nikandr Surkov.