{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Homework 0"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 1. Introduction"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Throughout this course, we will heavily rely on the NumPy and PyTorch libraries.\n",
"\n",
"The first library of interest is NumPy.\n",
"\n",
"> NumPy is the fundamental package for scientific computing with Python. It contains among other things:\n",
">\n",
"> - a powerful N-dimensional array object\n",
"> - sophisticated (broadcasting) functions\n",
"> - tools for integrating C/C++ and Fortran code\n",
"> - useful linear algebra, Fourier transform, and random number capabilities\n",
"> \n",
"> Besides its obvious scientific uses, NumPy can also be used as an efficient multi-dimensional container of generic data. Arbitrary data-types can be defined. This allows NumPy to seamlessly and speedily integrate with a wide variety of databases.\n",
">\n",
"> —*[About NumPy](http://www.numpy.org/)*\n",
"\n",
"The second library of interest is PyTorch.\n",
"> PyTorch is an open source deep learning platform that provides a seamless path from research prototyping to production deployment.\n",
"> - *Hybrid Front-End:* A new hybrid front-end seamlessly transitions between eager mode and graph mode to provide both flexibility and speed.\n",
"> - *Distributed Training:* Scalable distributed training and performance optimization in research and production is enabled by the torch.distributed backend.\n",
"> - *Python-First:* Deep integration into Python allows popular libraries and packages to be used for easily writing neural network layers in Python.\n",
"> - *Tools & Libraries:* A rich ecosystem of tools and libraries extends PyTorch and supports development in computer vision, NLP and more.\n",
">\n",
"> —*[About PyTorch](https://pytorch.org/)*\n",
"\n",
"One consideration as to why we are using PyTorch is most succinctly summarized by Andrej Karpathy, Director of Artificial Intelligence and Autopilot Vision at Tesla. The technical summary can be found [here](https://twitter.com/karpathy/status/868178954032513024?lang=en)."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"import os\n",
"import time\n",
"import torch"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 2. Vectorization\n",
"\n",
"Lists are a foundational data structure in Python, allowing us to create simple and complex algorithms to solve problems. However, in mathematics, and particularly in linear algebra, we work with vectors and matrices to model problems and create statistical solutions. These exercises will introduce you to thinking more mathematically through the use of NumPy, starting with a process known as vectorization.\n",
"\n",
"Index chasing is a valuable skill, and certainly one you will need in this course, but mathematical problems often have simpler and more efficient representations that use vectors. The process of converting an implementation that uses indices into one that uses vectors is known as vectorization. Once vectorized, the resulting implementation is often faster and more readable than before.\n",
"\n",
"In the following problems, we will ask you to practice reading mathematical expressions and deducing their vectorized equivalents, along with their implementation in Python. You will use the NumPy array object as the Python equivalent of a vector, and in later sections you will work with sets of vectors known as matrices."
]
},
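{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before the graded tasks, here is a quick illustration of why vectorization matters (a sketch for intuition only; the array and timings below are made up for this demo). On most machines the vectorized call runs orders of magnitude faster, because the loop over elements happens in compiled code rather than in the Python interpreter:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import time\n",
"import numpy as np\n",
"\n",
"x = np.arange(100000)\n",
"\n",
"start = time.time()\n",
"total = 0\n",
"for i in range(len(x)):\n",
"    total += x[i]          # index-chasing Python loop\n",
"loop_time = time.time() - start\n",
"\n",
"start = time.time()\n",
"total_vec = x.sum()        # vectorized NumPy equivalent\n",
"vec_time = time.time() - start\n",
"\n",
"print(total == total_vec)  # same result, very different speed"
]
},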
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 2.1 Dot Product\n",
"\n",
"In this task, you will implement the dot product function for numpy arrays.\n",
"\n",
"The dot product (also known as the scalar product or inner product) of two vectors with $n$ real components is the sum of the products of their corresponding components.\n",
"\n",
"$$x \\cdot y = x_1 y_1 + x_2 y_2 + \\cdots + x_n y_n$$\n",
"\n",
"**Your Task**: Implement the function `dot`."
]
},
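{
"cell_type": "markdown",
"metadata": {},
"source": [
"For a small worked example with made-up values, if $x = (1, 2, 3)$ and $y = (4, -5, 6)$, then\n",
"\n",
"$$x \\cdot y = (1)(4) + (2)(-5) + (3)(6) = 4 - 10 + 18 = 12.$$"
]
},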
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"def inefficient_dot(x, y):\n",
" \"\"\"\n",
" Inefficient dot product of two arrays.\n",
"\n",
" Parameters: \n",
" x (numpy.ndarray): 1-dimensional numpy array.\n",
" y (numpy.ndarray): 1-dimensional numpy array.\n",
"\n",
" Returns: \n",
" numpy.int64: scalar quantity.\n",
" \"\"\" \n",
" assert(len(x) == len(y))\n",
" \n",
" result = 0\n",
" for i in range(len(x)):\n",
" result += x[i]*y[i]\n",
" \n",
" return result"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"def dot(x, y):\n",
" \"\"\"\n",
" Dot product of two arrays.\n",
"\n",
" Parameters: \n",
" x (numpy.ndarray): 1-dimensional numpy array.\n",
" y (numpy.ndarray): 1-dimensional numpy array.\n",
"\n",
" Returns: \n",
" numpy.int64: scalar quantity.\n",
" \"\"\"\n",
"\n",
" return NotImplemented"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Test Example:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"NotImplemented\n"
]
}
],
"source": [
"np.random.seed(0)\n",
"X = np.random.randint(-1000, 1000, size=3000)\n",
"Y = np.random.randint(-1000, 1000, size=3000)\n",
"\n",
"print(dot(X,Y))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Expected Output**: \n",
"<table>\n",
"  <tr>\n",
"    <td><b>dot(X,Y)</b></td>\n",
"    <td>7082791</td>\n",
"  </tr>\n",
"</table>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 2.2 Outer Product\n",
"\n",
"In this task, you will implement the outer product function for numpy arrays.\n",
"\n",
"The outer product (also known as the tensor product) of vectors x and y is defined as\n",
"\n",
"$$\n",
"x \\otimes y =\n",
"\\begin{bmatrix}\n",
"x_1 y_1 & x_1 y_2 & … & x_1 y_n\\\\\n",
"x_2 y_1 & x_2 y_2 & … & x_2 y_n\\\\\n",
"⋮ & ⋮ & ⋱ & ⋮ \\\\\n",
"x_m y_1 & x_m y_2 & … & x_m y_n\n",
"\\end{bmatrix}\n",
"$$\n",
"\n",
"**Your Task**: Implement the function `outer`.\n"
]
},
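{
"cell_type": "markdown",
"metadata": {},
"source": [
"For a small worked example with made-up values, if $x = (1, 2)$ and $y = (3, 4)$, then\n",
"\n",
"$$\n",
"x \\otimes y =\n",
"\\begin{bmatrix}\n",
"1 \\cdot 3 & 1 \\cdot 4\\\\\n",
"2 \\cdot 3 & 2 \\cdot 4\n",
"\\end{bmatrix}\n",
"=\n",
"\\begin{bmatrix}\n",
"3 & 4\\\\\n",
"6 & 8\n",
"\\end{bmatrix}.\n",
"$$"
]
},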
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"def inefficient_outer(x, y):\n",
" \"\"\"\n",
" Inefficiently compute the outer product of two vectors.\n",
"\n",
" Parameters: \n",
" x (numpy.ndarray): 1-dimensional numpy array.\n",
" y (numpy.ndarray): 1-dimensional numpy array.\n",
"\n",
" Returns: \n",
" numpy.ndarray: 2-dimensional numpy array.\n",
" \"\"\"\n",
" result = np.zeros((len(x), len(y))) \n",
" for i in range(len(x)):\n",
" for j in range(len(y)):\n",
" result[i, j] = x[i]*y[j]\n",
" \n",
" return result"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"def outer(x, y):\n",
" \"\"\"\n",
" Compute the outer product of two vectors.\n",
"\n",
" Parameters: \n",
" x (numpy.ndarray): 1-dimensional numpy array.\n",
" y (numpy.ndarray): 1-dimensional numpy array.\n",
"\n",
" Returns: \n",
" numpy.ndarray: 2-dimensional numpy array.\n",
" \"\"\"\n",
"\n",
" return NotImplemented"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Test Example:"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"NotImplemented\n"
]
}
],
"source": [
"np.random.seed(0)\n",
"X = np.random.randint(-1000, 1000, size=3000)\n",
"Y = np.random.randint(-1000, 1000, size=3000)\n",
"\n",
"print(outer(X,Y))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Expected Output**: \n",
"<table>\n",
"  <tr>\n",
"    <td><b>outer(X,Y)</b></td>\n",
"    <td><pre>\n",
"[[ 59092 -144096 136512 ... -53088 -86268 53404]\n",
" [ 82467 -201096 190512 ... -74088 -120393 74529]\n",
" [-122111 297768 -282096 ... 109704 178269 -110357]\n",
" ...\n",
" [-144551 352488 -333936 ... 129864 211029 -130637]\n",
" [-179707 438216 -415152 ... 161448 262353 -162409]\n",
" [ 88825 -216600 205200 ... -79800 -129675 80275]]\n",
"</pre></td>\n",
"  </tr>\n",
"</table>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 2.3 Hadamard Product\n",
"\n",
"In this task, you will implement the Hadamard product function, `multiply`, for numpy arrays.\n",
"\n",
"The Hadamard product (also known as the Schur product or entrywise product) of vectors x and y is defined as\n",
"\n",
"$$\n",
"x \\circ y =\n",
"\\begin{bmatrix}\n",
"x_{1} y_{1} & x_{2} y_{2} & … & x_{n} y_{n}\n",
"\\end{bmatrix}\n",
"$$\n",
"\n",
"**Your Task**: Implement the function `multiply`."
]
},
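{
"cell_type": "markdown",
"metadata": {},
"source": [
"For a small worked example with made-up values, if $x = (1, 2, 3)$ and $y = (4, 5, 6)$, then\n",
"\n",
"$$x \\circ y = \\begin{bmatrix} 1 \\cdot 4 & 2 \\cdot 5 & 3 \\cdot 6 \\end{bmatrix} = \\begin{bmatrix} 4 & 10 & 18 \\end{bmatrix}.$$"
]
},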
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [],
"source": [
"def inefficient_multiply(x, y):\n",
" \"\"\"\n",
" Inefficiently multiply arguments element-wise.\n",
"\n",
" Parameters: \n",
" x (numpy.ndarray): 1-dimensional numpy array.\n",
" y (numpy.ndarray): 1-dimensional numpy array.\n",
"\n",
" Returns: \n",
" numpy.ndarray: 1-dimensional numpy array.\n",
" \"\"\"\n",
" assert(len(x) == len(y))\n",
" \n",
" result = np.zeros(len(x))\n",
" for i in range(len(x)):\n",
" result[i] = x[i]*y[i]\n",
" \n",
" return result"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [],
"source": [
"def multiply(x, y):\n",
" \"\"\"\n",
" Multiply arguments element-wise.\n",
"\n",
" Parameters: \n",
" x (numpy.ndarray): 1-dimensional numpy array.\n",
" y (numpy.ndarray): 1-dimensional numpy array.\n",
"\n",
" Returns: \n",
" numpy.ndarray: 1-dimensional numpy array.\n",
" \"\"\"\n",
"\n",
" return NotImplemented"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Test Example:"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"NotImplemented\n"
]
}
],
"source": [
"np.random.seed(0)\n",
"X = np.random.randint(-1000, 1000, size=3000)\n",
"Y = np.random.randint(-1000, 1000, size=3000)\n",
"\n",
"print(multiply(X,Y))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Expected Output**: \n",
"<table>\n",
"  <tr>\n",
"    <td><b>multiply(X,Y)</b></td>\n",
"    <td><pre>\n",
"[ 59092 -201096 -282096 ... 129864 262353 80275]\n",
"</pre></td>\n",
"  </tr>\n",
"</table>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 2.4 Sum-Product\n",
"In this task, you will implement the sum-product function for numpy arrays.\n",
"\n",
"The sum-product of vectors x and y, each with n real components, is defined as \n",
"\n",
"$$\n",
"f(x, y) = \n",
"{\n",
"\\begin{bmatrix}\n",
"1\\\\\n",
"1\\\\\n",
"⋮\\\\\n",
"1\n",
"\\end{bmatrix}^{\\;T}\n",
"%\n",
"\\begin{bmatrix}\n",
"x_1 y_1 & x_1 y_2 & … & x_1 y_n\\\\\n",
"x_2 y_1 & x_2 y_2 & … & x_2 y_n\\\\\n",
"⋮ & ⋮ & ⋱ & ⋮ \\\\\n",
"x_n y_1 & x_n y_2 & … & x_n y_n\n",
"\\end{bmatrix}\n",
"%\n",
"\\begin{bmatrix}\n",
"1\\\\\n",
"1\\\\\n",
"⋮\\\\\n",
"1\n",
"\\end{bmatrix}\n",
"} = \n",
"\\displaystyle\\sum_{i=1}^{n} \\displaystyle\\sum_{j=1}^{n} x_i \\cdot y_j\n",
"$$\n",
"\n",
"**Your Task**: Implement the function `sumproduct`.\n"
]
},
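{
"cell_type": "markdown",
"metadata": {},
"source": [
"For a small worked example with made-up values, if $x = (1, 2)$ and $y = (3, 4)$, then\n",
"\n",
"$$f(x, y) = 1 \\cdot 3 + 1 \\cdot 4 + 2 \\cdot 3 + 2 \\cdot 4 = 3 + 4 + 6 + 8 = 21.$$"
]
},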
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [],
"source": [
"def inefficient_sumproduct(x, y):\n",
" \"\"\"\n",
" Inefficiently sum over all the dimensions of the outer product \n",
" of two vectors.\n",
"\n",
" Parameters: \n",
" x (numpy.ndarray): 1-dimensional numpy array.\n",
" y (numpy.ndarray): 1-dimensional numpy array.\n",
"\n",
" Returns: \n",
" numpy.int64: scalar quantity.\n",
" \"\"\"\n",
" assert(len(x) == len(y))\n",
" \n",
" result = 0\n",
" for i in range(len(x)):\n",
" for j in range(len(y)):\n",
" result += x[i] * y[j]\n",
" \n",
" return result"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [],
"source": [
"def sumproduct(x, y):\n",
" \"\"\"\n",
" Sum over all the dimensions of the outer product of two vectors.\n",
"\n",
" Parameters: \n",
" x (numpy.ndarray): 1-dimensional numpy array.\n",
" y (numpy.ndarray): 1-dimensional numpy array.\n",
"\n",
" Returns: \n",
" numpy.int64: scalar quantity.\n",
" \"\"\"\n",
"\n",
" return NotImplemented"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Test Example:"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"NotImplemented\n"
]
}
],
"source": [
"np.random.seed(0)\n",
"X = np.random.randint(-1000, 1000, size=3000)\n",
"Y = np.random.randint(-1000, 1000, size=3000)\n",
"\n",
"print(sumproduct(X,Y))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Expected Output**: \n",
"<table>\n",
"  <tr>\n",
"    <td><b>sumproduct(X,Y)</b></td>\n",
"    <td>265421520</td>\n",
"  </tr>\n",
"</table>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 2.5 ReLU\n",
"\n",
"In this task, you will implement the ReLU activation function for numpy arrays.\n",
"\n",
"The ReLU activation (also known as the rectifier or rectified linear unit) matrix Z resulting from applying the ReLU function to matrix X is defined such that for $X,Z \\in M_{m \\times n} (\\mathbb{R})$, \n",
"\n",
"$$Z = {\\tt ReLU}(X) \\implies \\begin{cases}z_{ij} = x_{ij}&{\\mbox{if }}x_{ij}>0\\\\z_{ij} = 0&{\\mbox{otherwise.}}\\end{cases}$$\n",
"\n",
"For reference, it is common to use the notation $X = (x_{ij})$ and $Z = (z_{ij})$.\n",
"\n",
"**Your Task:** Implement the function `ReLU`."
]
},
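{
"cell_type": "markdown",
"metadata": {},
"source": [
"For a small worked example with made-up values,\n",
"\n",
"$${\\tt ReLU}\\left(\\begin{bmatrix} 2 & -3\\\\ -1 & 4 \\end{bmatrix}\\right) = \\begin{bmatrix} 2 & 0\\\\ 0 & 4 \\end{bmatrix}.$$"
]
},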
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [],
"source": [
"def inefficient_ReLU(x):\n",
" \"\"\"\n",
" Inefficiently applies the rectified linear unit function \n",
" element-wise.\n",
"\n",
" Parameters: \n",
" x (numpy.ndarray): 2-dimensional numpy array.\n",
"\n",
" Returns: \n",
" numpy.ndarray: 2-dimensional numpy array.\n",
" \"\"\"\n",
" result = np.copy(x)\n",
" for i in range(x.shape[0]):\n",
" for j in range(x.shape[1]):\n",
" if x[i][j] < 0:\n",
" result[i][j] = 0\n",
" \n",
" return result"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [],
"source": [
"def ReLU(x):\n",
" \"\"\"\n",
" Applies the rectified linear unit function element-wise.\n",
"\n",
" Parameters: \n",
" x (numpy.ndarray): 2-dimensional numpy array.\n",
"\n",
" Returns: \n",
" numpy.ndarray: 2-dimensional numpy array.\n",
" \"\"\"\n",
"\n",
" return NotImplemented"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Test Example:"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"NotImplemented\n"
]
}
],
"source": [
"np.random.seed(0)\n",
"X = np.random.randint(-1000, 1000, size=(3000,3000))\n",
"\n",
"print(ReLU(X))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Expected Output**: \n",
"<table>\n",
"  <tr>\n",
"    <td><b>ReLU(X)</b></td>\n",
"    <td><pre>\n",
"[[  0   0 653 ... 773 961   0]\n",
" [  0 456   0 ... 168 273   0]\n",
" [936 475   0 ... 408   0   0]\n",
" ...\n",
" [  0 396 457 ... 646   0   0]\n",
" [645 943   0 ... 863   0 790]\n",
" [641   0 379 ... 347   0   0]]\n",
"</pre></td>\n",
"  </tr>\n",
"</table>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 2.6 Prime ReLU (derivative of ReLU)\n",
"\n",
"In this task, you will implement the derivative of the ReLU activation function for numpy arrays.\n",
"\n",
"The derivative of the ReLU activation matrix Z resulting from applying the derivative of the ReLU function to matrix X is defined such that for $X,Z \\in M_{m \\times n} (\\mathbb{R})$, \n",
"\n",
"$$Z = {\\tt PrimeReLU}(X) \\implies \\begin{cases}z_{ij} = \\frac{d}{dx_{ij}} (x_{ij})&{\\mbox{if }}x_{ij}> 0\\\\z_{ij} = \\frac{d}{dx_{ij}} (0)&{\\mbox{otherwise.}}\\end{cases}$$\n",
"\n",
"For reference, it is common to use the notation $X = (x_{ij})$ and $Z = (z_{ij})$.\n",
"\n",
"**Your Task:** Implement the function `PrimeReLU`."
]
},
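{
"cell_type": "markdown",
"metadata": {},
"source": [
"For a small worked example with made-up values (each positive entry has derivative $1$; the rest map to $0$),\n",
"\n",
"$${\\tt PrimeReLU}\\left(\\begin{bmatrix} 2 & -3\\\\ -1 & 4 \\end{bmatrix}\\right) = \\begin{bmatrix} 1 & 0\\\\ 0 & 1 \\end{bmatrix}.$$"
]
},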
{
"cell_type": "code",
"execution_count": 17,
"metadata": {},
"outputs": [],
"source": [
"def inefficient_PrimeReLU(x):\n",
" \"\"\"\n",
" Inefficiently applies the derivative of the rectified linear unit \n",
" function element-wise.\n",
"\n",
" Parameters: \n",
" x (numpy.ndarray): 2-dimensional numpy array.\n",
"\n",
" Returns: \n",
" numpy.ndarray: 2-dimensional numpy array.\n",
" \"\"\"\n",
"\n",
" result = np.copy(x)\n",
" for i in range(x.shape[0]):\n",
" for j in range(x.shape[1]):\n",
" if x[i][j] < 0:\n",
" result[i][j] = 0\n",
" else:\n",
" result[i][j] = 1\n",
" \n",
" return result"
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {},
"outputs": [],
"source": [
"def PrimeReLU(x):\n",
" \"\"\"\n",
" Applies the derivative of the rectified linear unit function \n",
" element-wise.\n",
"\n",
" Parameters: \n",
" x (numpy.ndarray): 2-dimensional numpy array.\n",
"\n",
" Returns: \n",
" numpy.ndarray: 2-dimensional numpy array.\n",
" \"\"\"\n",
"\n",
" return NotImplemented"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Test Example:"
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"NotImplemented\n"
]
}
],
"source": [
"np.random.seed(0)\n",
"X = np.random.randint(-1000, 1000, size=(3000,3000))\n",
"\n",
"print(PrimeReLU(X))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Expected Output**: \n",
"<table>\n",
"  <tr>\n",
"    <td><b>PrimeReLU(X)</b></td>\n",
"    <td><pre>\n",
"[[0 0 1 ... 1 1 0]\n",
" [0 1 0 ... 1 1 0]\n",
" [1 1 0 ... 1 0 0]\n",
" ...\n",
" [0 1 1 ... 1 0 0]\n",
" [1 1 0 ... 1 0 1]\n",
" [1 0 1 ... 1 0 0]]\n",
"</pre></td>\n",
"  </tr>\n",
"</table>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 3. Pre-processing\n",
"When working with mathematical objects, there are often strict requirements on their contents, such as dimension or set membership. These requirements allow us to state proofs and conclusions about given objects, which motivate our mathematical models. However, data is often not collected in a way that is consistent with our desired requirements.\n",
"\n",
"In this problem, you will be introduced to two types of data that will frequently be used in this course: univariate time-series data and multivariate time-series data. Your goal will be to process the given data so that the result is consistent with some requirements. Though the motivation behind these requirements may not yet be clear, they will become useful later in the course.\n",
"\n",
"For your implementation, we encourage you to make use of NumPy to [slice](https://docs.scipy.org/doc/numpy/reference/arrays.indexing.html) and to [pad](https://docs.scipy.org/doc/numpy/reference/generated/numpy.pad.html) data."
]
},
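{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a refresher before the tasks (a sketch on made-up toy data, not part of the graded work), basic NumPy slicing and zero-padding look like this. The `pad_width` argument gives (before, after) amounts per axis:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"# Toy utterance: 3 frames, 2 frequency bins (made-up data)\n",
"x = np.array([[1, 2],\n",
"              [3, 4],\n",
"              [5, 6]])\n",
"\n",
"middle = x[1:3]   # rows 1 and 2 -> shape (2, 2)\n",
"\n",
"# Append two all-zero frames: pad 2 rows after, 0 columns\n",
"padded = np.pad(x, pad_width=((0, 2), (0, 0)), mode='constant')\n",
"print(middle.shape, padded.shape)"
]
},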
{
"cell_type": "code",
"execution_count": 20,
"metadata": {},
"outputs": [],
"source": [
"def get_data_1():\n",
" \"\"\"\n",
" This is the generating process from which example data 1 will derive\n",
" \n",
" Parameters: \n",
" None\n",
" \n",
" Returns: \n",
" numpy.ndarray: 1-d numpy array with 2-d numpy arrays as elements.\n",
" \"\"\"\n",
" freq000 = 3; freq001 = 1; freq002 = 4; freq003 = 1\n",
" freq010 = 5; freq011 = 9; freq012 = 2; freq013 = 6\n",
" freq020 = 5; freq021 = 3; freq022 = 5; freq023 = 8\n",
" frame00 = np.array([freq000, freq001, freq002, freq003])\n",
" frame01 = np.array([freq010, freq011, freq012, freq013])\n",
" frame02 = np.array([freq020, freq021, freq022, freq023])\n",
" utterance0 = np.array([frame00, frame01, frame02])\n",
"\n",
" freq100 = 9; freq101 = 7; freq102 = 9; freq103 = 3\n",
" freq110 = 2; freq111 = 3; freq112 = 8; freq113 = 4\n",
" frame10 = np.array([freq100, freq101, freq102, freq103])\n",
" frame11 = np.array([freq110, freq111, freq112, freq113])\n",
" utterance1 = np.array([frame10, frame11])\n",
"\n",
" freq200 = 6; freq201 = 2; freq202 = 6; freq203 = 4\n",
" freq210 = 3; freq211 = 3; freq212 = 8; freq213 = 3\n",
" freq220 = 2; freq221 = 7; freq222 = 9; freq223 = 5\n",
" freq230 = 0; freq231 = 2; freq232 = 8; freq233 = 8\n",
" frame20 = np.array([freq200, freq201, freq202, freq203])\n",
" frame21 = np.array([freq210, freq211, freq212, freq213])\n",
" frame22 = np.array([freq220, freq221, freq222, freq223])\n",
" frame23 = np.array([freq230, freq231, freq232, freq233])\n",
" utterance2 = np.array([frame20, frame21, frame22, frame23])\n",
"\n",
" spectrograms = np.array([utterance0, utterance1, utterance2])\n",
"\n",
" return spectrograms\n",
"\n",
"def get_data_2():\n",
" \"\"\"\n",
" This is the generating process from which example data 2 will derive\n",
" \n",
" Parameters: \n",
" None\n",
" \n",
" Returns: \n",
" numpy.ndarray: 1-d numpy array with 2-d numpy arrays as elements.\n",
" \"\"\"\n",
" np.random.seed(0)\n",
" recordings = np.random.randint(10)\n",
" durations = [np.random.randint(low=5, high=10) \n",
" for i in range(recordings)]\n",
" data = []\n",
" k = 40 # Given as fixed constant\n",
" for duration in durations:\n",
" data.append(np.random.randint(10, size=(duration, k)))\n",
" data = np.asarray(data)\n",
" return data"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Example 1 Data:\n",
"\n",
""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<table>\n",
"  <tr>\n",
"    <td><b>get_data_1()</b></td>\n",
"    <td><pre>\n",
"[array([[3, 1, 4, 1],\n",
"        [5, 9, 2, 6],\n",
"        [5, 3, 5, 8]])\n",
" array([[9, 7, 9, 3],\n",
"        [2, 3, 8, 4]])\n",
" array([[6, 2, 6, 4],\n",
"        [3, 3, 8, 3],\n",
"        [2, 7, 9, 5],\n",
"        [0, 2, 8, 8]])]\n",
"</pre></td>\n",
"  </tr>\n",
"</table>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 3.1 Slicing: Last Point\n",
"Given a 3-dimensional array and the length $m$ of the output instances, your task is to keep only the last $m$ frames of each instance in the dataset.\n",
"\n",
"To the extent that it is helpful, a formal description is provided in the Appendix.\n",
"\n",
"**Your Task:** Implement the function `slice_last_point`."
]
},
{
"cell_type": "code",
"execution_count": 21,
"metadata": {},
"outputs": [],
"source": [
"def slice_last_point(x, m):\n",
" \"\"\"\n",
" Takes one 3-dimensional array with the length of the output instances.\n",
" Your task is to keep only the last m points for each instance in \n",
" the dataset.\n",
"\n",
" Parameters: \n",
" x (numpy.ndarray): 1-d numpy array with 2-d numpy arrays as elements (n, ?, k). \n",
" m (int): The cutoff reference index in dimension 2.\n",
" \n",
" Returns: \n",
" numpy.ndarray: A 3-dimensional numpy array of shape (n, m, k)\n",
" \"\"\"\n",
" spectrograms = x\n",
" \n",
" # Input function dimension specification\n",
" assert(spectrograms.ndim == 1)\n",
" for utter in spectrograms:\n",
" assert(utter.ndim == 2)\n",
"\n",
" # Pre-define output function dimension specification\n",
" dim1 = spectrograms.shape[0] # n\n",
" dim2 = m # m\n",
" dim3 = spectrograms[0].shape[1] # k\n",
"\n",
" result = np.zeros((dim1,dim2,dim3))\n",
"\n",
" #### Start of your code ####\n",
"\n",
" \n",
" #### End of your code ####\n",
"\n",
" # Assert output function dimension specification\n",
" assert(result.shape[0] == dim1)\n",
" assert(result.shape[1] == dim2)\n",
" assert(result.shape[2] == dim3)\n",
" \n",
" return result"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Example 1:"
]
},
{
"cell_type": "code",
"execution_count": 22,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[[[0. 0. 0. 0.]\n",
" [0. 0. 0. 0.]]\n",
"\n",
" [[0. 0. 0. 0.]\n",
" [0. 0. 0. 0.]]\n",
"\n",
" [[0. 0. 0. 0.]\n",
" [0. 0. 0. 0.]]]\n"
]
}
],
"source": [
"spectrograms = get_data_1()\n",
"duration = 2\n",
"print(slice_last_point(spectrograms, duration))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Expected Output 1\n",
"\n",
""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<table>\n",
"  <tr>\n",
"    <td><b>slice_last_point(spectrograms, duration)</b></td>\n",
"    <td><pre>\n",
"[[[5. 9. 2. 6.]\n",
"  [5. 3. 5. 8.]]\n",
"\n",
" [[9. 7. 9. 3.]\n",
"  [2. 3. 8. 4.]]\n",
"\n",
" [[2. 7. 9. 5.]\n",
"  [0. 2. 8. 8.]]]\n",
"</pre></td>\n",
"  </tr>\n",
"</table>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Example 2:"
]
},
{
"cell_type": "code",
"execution_count": 23,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n",
" 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]\n",
" [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n",
" 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]\n",
" [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n",
" 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]\n",
" [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n",
" 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]\n",
" [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n",
" 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]]\n"
]
}
],
"source": [
"data = get_data_2()\n",
"m = 5\n",
"print(slice_last_point(data, m)[1])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Expected Output 2**: \n",
"<table>\n",
"  <tr>\n",
"    <td><b>slice_last_point(data, m)[1]</b></td>\n",
"    <td><pre>\n",
"[[7. 2. 7. 1. 6. 5. 0. 0. 3. 1. 9. 9. 6. 6. 7. 8. 8. 7. 0. 8. 6. 8. 9. 8.\n",
"  3. 6. 1. 7. 4. 9. 2. 0. 8. 2. 7. 8. 4. 4. 1. 7.]\n",
" [6. 9. 4. 1. 5. 9. 7. 1. 3. 5. 7. 3. 6. 6. 7. 9. 1. 9. 6. 0. 3. 8. 4. 1.\n",
"  4. 5. 0. 3. 1. 4. 4. 4. 0. 0. 8. 4. 6. 9. 3. 3.]\n",
" [2. 1. 2. 1. 3. 4. 1. 1. 0. 7. 8. 4. 3. 5. 6. 3. 2. 9. 8. 1. 4. 0. 8. 3.\n",
"  9. 5. 5. 1. 7. 8. 6. 4. 7. 3. 5. 3. 6. 4. 7. 3.]\n",
" [0. 5. 9. 3. 7. 5. 5. 8. 0. 8. 3. 6. 9. 3. 2. 7. 0. 3. 0. 3. 6. 1. 9. 2.\n",
"  9. 4. 9. 1. 3. 2. 4. 9. 7. 4. 9. 4. 1. 2. 7. 2.]\n",
" [3. 9. 7. 6. 6. 2. 3. 6. 0. 8. 0. 7. 6. 5. 9. 6. 5. 2. 7. 1. 9. 2. 2. 5.\n",
"  6. 4. 2. 2. 1. 0. 9. 0. 2. 8. 3. 0. 8. 8. 1. 0.]]\n",
"</pre></td>\n",
"  </tr>\n",
"</table>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 3.2 Slicing: Fixed Point\n",
"Given a 3-dimensional array, a starting position, and the length of the output instances, your task is to slice each instance from the same starting position for the given length.\n",
"\n",
"To the extent that it is helpful, a formal description is provided in the Appendix.\n",
"\n",
"**Your Task:** Implement the function `slice_fixed_point`."
]
},
{
"cell_type": "code",
"execution_count": 24,
"metadata": {},
"outputs": [],
"source": [
"def slice_fixed_point(x, s, m):\n",
" \"\"\"\n",
" Takes one 3-dimensional array with the starting position and the \n",
" length of the output instances. Your task is to slice the instances \n",
" from the same starting position for the given length.\n",
"\n",
" Parameters:\n",
" x (numpy.ndarray): 1-d numpy array with 2-d numpy arrays as elements (n, ?, k).\n",
" s (int): The starting reference index in dimension 2.\n",
" m (int): The cutoff reference index in dimension 2.\n",
" \n",
" Returns:\n",
" numpy.ndarray: A 3-dimensional int numpy array of shape (n, m-s, k)\n",
" \"\"\"\n",
" spectrograms = x\n",
" \n",
" # Input function dimension specification\n",
" assert(spectrograms.ndim == 1)\n",
" for utter in spectrograms:\n",
" assert(utter.ndim == 2)\n",
"\n",
" # Pre-define output function dimension specification\n",
" dim1 = spectrograms.shape[0] # n\n",
" dim2 = m-s # m-s\n",
" dim3 = spectrograms[0].shape[1] # k\n",
"\n",
" result = np.zeros((dim1,dim2,dim3))\n",
"\n",
" #### Start of your code ####\n",
"\n",
" \n",
" #### End of your code ####\n",
"\n",
" # Assert output function dimension specification\n",
" assert(result.shape[0] == dim1)\n",
" assert(result.shape[1] == dim2)\n",
" assert(result.shape[2] == dim3)\n",
" \n",
" return result"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Test Example 1:"
]
},
{
"cell_type": "code",
"execution_count": 25,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[[[0. 0. 0. 0.]\n",
" [0. 0. 0. 0.]]\n",
"\n",
" [[0. 0. 0. 0.]\n",
" [0. 0. 0. 0.]]\n",
"\n",
" [[0. 0. 0. 0.]\n",
" [0. 0. 0. 0.]]]\n"
]
}
],
"source": [
"spectrograms = get_data_1()\n",
"start = 0\n",
"end = 2\n",
"print(slice_fixed_point(spectrograms, start, end))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Expected Output 1\n",
"\n",
""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<table>\n",
"  <tr>\n",
"    <td><b>slice_fixed_point(spectrograms, start, end)</b></td>\n",
"    <td><pre>\n",
"[[[3. 1. 4. 1.]\n",
"  [5. 9. 2. 6.]]\n",
"\n",
" [[9. 7. 9. 3.]\n",
"  [2. 3. 8. 4.]]\n",
"\n",
" [[6. 2. 6. 4.]\n",
"  [3. 3. 8. 3.]]]\n",
"</pre></td>\n",
"  </tr>\n",
"</table>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Test Example 2:"
]
},
{
"cell_type": "code",
"execution_count": 26,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n",
" 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]\n",
" [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n",
" 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]\n",
" [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n",
" 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]]\n"
]
}
],
"source": [
"data = get_data_2()\n",
"s = 2\n",
"m = 5\n",
"print(slice_fixed_point(data, s, m)[1])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Expected Output 2**: \n",
"<table>\n",
"  <tr>\n",
"    <td><b>slice_fixed_point(data, s, m)[1]</b></td>\n",
"    <td><pre>\n",
"[[8. 7. 0. 3. 8. 7. 7. 1. 8. 4. 7. 0. 4. 9. 0. 6. 4. 2. 4. 6. 3. 3. 7. 8.\n",
"  5. 0. 8. 5. 4. 7. 4. 1. 3. 3. 9. 2. 5. 2. 3. 5.]\n",
" [7. 2. 7. 1. 6. 5. 0. 0. 3. 1. 9. 9. 6. 6. 7. 8. 8. 7. 0. 8. 6. 8. 9. 8.\n",
"  3. 6. 1. 7. 4. 9. 2. 0. 8. 2. 7. 8. 4. 4. 1. 7.]\n",
" [6. 9. 4. 1. 5. 9. 7. 1. 3. 5. 7. 3. 6. 6. 7. 9. 1. 9. 6. 0. 3. 8. 4. 1.\n",
"  4. 5. 0. 3. 1. 4. 4. 4. 0. 0. 8. 4. 6. 9. 3. 3.]]\n",
"</pre></td>\n",
"  </tr>\n",
"</table>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 3.3 Slicing: Random Point\n",
"Given a 3-dimensional array and the length of the output instances, your task is to slice each instance from a random starting point in each of the utterances with the given length. Please use the function `numpy.random.randint` to generate the starting position.\n",
"\n",
"To the extent that it is helpful, a formal description is provided in the Appendix.\n",
"\n",
"**Your Task:** Implement the function `slice_random_point`."
]
},
{
"cell_type": "code",
"execution_count": 27,
"metadata": {},
"outputs": [],
"source": [
"def slice_random_point(x, d):\n",
" \"\"\"\n",
" Takes one 3-dimensional array with the length of the output instances.\n",
" Your task is to slice the instances from a random point in each of the\n",
" utterances with the given length. Please use the precomputed offset values\n",
" and refer to their mathematical correspondence.\n",
"\n",
" Parameters: \n",
" x (numpy.ndarray): 1-d numpy array with 2-d numpy arrays as elements (n, ?, k).\n",
" d (int): The resulting size of the data in dimension 2.\n",
" \n",
" Returns: \n",
" numpy.ndarray: A 3-dimensional int numpy array of shape (n, d, k)\n",
" \"\"\"\n",
" spectrograms = x\n",
" \n",
" # Input function dimension specification\n",
" assert(spectrograms.ndim == 1)\n",
" for utter in spectrograms:\n",
" assert(utter.ndim == 2)\n",
" assert(utter.shape[0] >= d)\n",
"\n",
" offset = [np.random.randint(utter.shape[0]-d+1)\n",
" if utter.shape[0]-d > 0 else 0\n",
" for utter in spectrograms]\n",
"\n",
" # Pre-define output function dimension specification\n",
" dim1 = spectrograms.shape[0] # n\n",
" dim2 = d # d\n",
" dim3 = spectrograms[0].shape[1] # k\n",
"\n",
" result = np.zeros((dim1,dim2,dim3))\n",
"\n",
" #### Start of your code ####\n",
"\n",
" \n",
" \n",
" \n",
" #### End of your code ####\n",
"\n",
" # Assert output function dimension specification\n",
" assert(result.shape[0] == dim1)\n",
" assert(result.shape[1] == dim2)\n",
" assert(result.shape[2] == dim3)\n",
" \n",
" return result"
]
},
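{
"cell_type": "markdown",
"metadata": {},
"source": [
"*Hint:* one possible way to fill in the slicing block above (a sketch, not the only valid solution) is to index each utterance with its precomputed `offset`, keeping `d` rows and all `k` columns:\n",
"\n",
"```python\n",
"for i, utter in enumerate(spectrograms):\n",
"    # take d consecutive rows starting at the random offset\n",
"    result[i] = utter[offset[i]:offset[i] + d, :]\n",
"```"
]
},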
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Test Example 1:"
]
},
{
"cell_type": "code",
"execution_count": 28,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[[[0. 0. 0. 0.]\n",
" [0. 0. 0. 0.]]\n",
"\n",
" [[0. 0. 0. 0.]\n",
" [0. 0. 0. 0.]]\n",
"\n",
" [[0. 0. 0. 0.]\n",
" [0. 0. 0. 0.]]]\n"
]
}
],
"source": [
"np.random.seed(1)\n",
"spectrograms = get_data_1()\n",
"duration = 2\n",
"print(slice_random_point(spectrograms, duration))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Expected Output 1\n",
"\n",
""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
" \n",
" slice_random_point( spectrograms, duration) | \n",
" \n",
" [[[5. 9. 2. 6.] \n",
" [5. 3. 5. 8.]] \n",
"\n",
" [[9. 7. 9. 3.] \n",
" [2. 3. 8. 4.]] \n",
"\n",
" [[6. 2. 6. 4.] \n",
" [3. 3. 8. 3.]]] \n",
" | \n",
"
\n",
"
"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Test Example 2:"
]
},
{
"cell_type": "code",
"execution_count": 29,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n",
" 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]\n",
" [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n",
" 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]\n",
" [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n",
" 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]\n",
" [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n",
" 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]]\n"
]
}
],
"source": [
"data = get_data_2()\n",
"d = 4\n",
"print(slice_random_point(data, d)[1])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Expected Output 2**: \n",
"\n",
" \n",
" slice_random_point( data, d)[1] | \n",
" \n",
"[[3. 3. 7. 9. 9. 9. 7. 3. 2. 3. 9. 7. 7. 5. 1. 2. 2. 8. 1. 5. 8. 4. 0. 2. \n",
" 5. 5. 0. 8. 1. 1. 0. 3. 8. 8. 4. 4. 0. 9. 3. 7.] \n",
" [3. 2. 1. 1. 2. 1. 4. 2. 5. 5. 5. 2. 5. 7. 7. 6. 1. 6. 7. 2. 3. 1. 9. 5. \n",
" 9. 9. 2. 0. 9. 1. 9. 0. 6. 0. 4. 8. 4. 3. 3. 8.] \n",
" [8. 7. 0. 3. 8. 7. 7. 1. 8. 4. 7. 0. 4. 9. 0. 6. 4. 2. 4. 6. 3. 3. 7. 8. \n",
" 5. 0. 8. 5. 4. 7. 4. 1. 3. 3. 9. 2. 5. 2. 3. 5.] \n",
" [7. 2. 7. 1. 6. 5. 0. 0. 3. 1. 9. 9. 6. 6. 7. 8. 8. 7. 0. 8. 6. 8. 9. 8. \n",
" 3. 6. 1. 7. 4. 9. 2. 0. 8. 2. 7. 8. 4. 4. 1. 7.]]\n",
" | \n",
"
\n",
"
"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 3.4 Padding: Ending Pattern\n",
"Takes one 3-dimensional array. Your task is to pad the instances from the end position as shown in the example below. That is, you need to pad the reflection of the utterance mirrored along the edge values of the array.\n",
"\n",
"To the extent that it is helpful, a formal description provided in the Appendix.\n",
"\n",
"**Your Task:** Implement the function `pad_ending_pattern`."
]
},
{
"cell_type": "code",
"execution_count": 30,
"metadata": {},
"outputs": [],
"source": [
"def pad_ending_pattern(x):\n",
" \"\"\"\n",
" Takes one 3-dimensional array. Your task is to pad the instances from \n",
" the end position as shown in the example below. That is, you need to \n",
" pads with the reflection of the vector mirrored along the edge of the array.\n",
" \n",
" Parameters: \n",
" x (numpy.ndarray): 1-d numpy array with 2-d numpy arrays as elements.\n",
" \n",
" Returns: \n",
" numpy.ndarray: 3-dimensional int numpy array\n",
" \"\"\"\n",
" spectrograms = x\n",
" \n",
" # Input function dimension specification\n",
" assert(spectrograms.ndim == 1)\n",
" for utter in spectrograms:\n",
" assert(utter.ndim == 2)\n",
"\n",
" # Pre-define output function dimension specification\n",
" dim1 = spectrograms.shape[0] # n\n",
" dim2 = max([utter.shape[0] for utter in spectrograms]) # m\n",
" dim3 = spectrograms[0].shape[1] # k\n",
"\n",
" result = np.zeros((dim1, dim2, dim3))\n",
"\n",
" #### Start of your code ####\n",
"\n",
" \n",
" \n",
" #### End of your code ####\n",
"\n",
" # Assert output function dimension specification\n",
" assert(result.shape[0] == dim1)\n",
" assert(result.shape[1] == dim2)\n",
" assert(result.shape[2] == dim3)\n",
" \n",
" return result"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Test Example 1:"
]
},
{
"cell_type": "code",
"execution_count": 31,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[[[0. 0. 0. 0.]\n",
" [0. 0. 0. 0.]\n",
" [0. 0. 0. 0.]\n",
" [0. 0. 0. 0.]]\n",
"\n",
" [[0. 0. 0. 0.]\n",
" [0. 0. 0. 0.]\n",
" [0. 0. 0. 0.]\n",
" [0. 0. 0. 0.]]\n",
"\n",
" [[0. 0. 0. 0.]\n",
" [0. 0. 0. 0.]\n",
" [0. 0. 0. 0.]\n",
" [0. 0. 0. 0.]]]\n"
]
}
],
"source": [
"spectrograms = get_data_1()\n",
"duration = 2\n",
"print(pad_ending_pattern(spectrograms))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Expected Output 1\n",
""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
" \n",
" pad_ending_pattern(spectrograms) | \n",
" \n",
"[[[3. 1. 4. 1.] \n",
" [5. 9. 2. 6.] \n",
" [5. 3. 5. 8.] \n",
" [5. 3. 5. 8.]] \n",
"\n",
" [[9. 7. 9. 3.] \n",
" [2. 3. 8. 4.] \n",
" [2. 3. 8. 4.] \n",
" [9. 7. 9. 3.]] \n",
"\n",
" [[6. 2. 6. 4.] \n",
" [3. 3. 8. 3.] \n",
" [2. 7. 9. 5.] \n",
" [0. 2. 8. 8.]]]\n",
" | \n",
"
\n",
"
"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Test Example 2:"
]
},
{
"cell_type": "code",
"execution_count": 32,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n",
" 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]\n",
" [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n",
" 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]\n",
" [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n",
" 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]\n",
" [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n",
" 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]\n",
" [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n",
" 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]\n",
" [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n",
" 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]\n",
" [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n",
" 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]\n",
" [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n",
" 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]]\n"
]
}
],
"source": [
"data = get_data_2()\n",
"print(pad_ending_pattern(data)[4])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Expected Output 2**: \n",
"\n",
" \n",
" pad_ending_pattern( data)[4] | \n",
" \n",
"[[8. 2. 4. 3. 1. 6. 5. 8. 4. 3. 6. 5. 3. 7. 8. 8. 3. 7. 8. 5. 7. 2. 7. 8. \n",
" 0. 7. 4. 8. 4. 4. 0. 4. 8. 0. 0. 4. 7. 3. 7. 7.] \n",
" [2. 2. 1. 7. 0. 7. 5. 9. 7. 1. 1. 2. 4. 1. 4. 5. 8. 2. 1. 6. 3. 0. 3. 9. \n",
" 5. 1. 3. 7. 1. 1. 7. 9. 4. 2. 0. 3. 2. 4. 0. 0.] \n",
" [9. 3. 8. 3. 0. 4. 4. 0. 2. 5. 5. 8. 2. 7. 3. 6. 1. 0. 2. 2. 5. 5. 1. 2. \n",
" 8. 7. 3. 7. 3. 1. 0. 1. 0. 8. 8. 5. 3. 3. 1. 0.] \n",
" [6. 1. 6. 9. 5. 7. 0. 1. 4. 9. 5. 1. 6. 5. 4. 4. 4. 7. 2. 2. 6. 5. 3. 0. \n",
" 8. 8. 1. 8. 7. 5. 7. 9. 4. 0. 7. 2. 3. 9. 5. 4.] \n",
" [0. 4. 5. 8. 1. 4. 8. 0. 1. 1. 8. 9. 4. 9. 0. 3. 0. 7. 0. 8. 1. 2. 8. 5. \n",
" 8. 2. 1. 3. 5. 0. 2. 5. 8. 6. 2. 7. 7. 1. 8. 4.] \n",
" [9. 3. 3. 2. 9. 0. 4. 6. 4. 3. 2. 3. 1. 1. 2. 7. 2. 7. 0. 1. 8. 0. 5. 2. \n",
" 8. 0. 4. 0. 3. 8. 1. 6. 4. 6. 9. 6. 4. 7. 2. 9.] \n",
" [9. 3. 3. 2. 9. 0. 4. 6. 4. 3. 2. 3. 1. 1. 2. 7. 2. 7. 0. 1. 8. 0. 5. 2. \n",
" 8. 0. 4. 0. 3. 8. 1. 6. 4. 6. 9. 6. 4. 7. 2. 9.] \n",
" [0. 4. 5. 8. 1. 4. 8. 0. 1. 1. 8. 9. 4. 9. 0. 3. 0. 7. 0. 8. 1. 2. 8. 5. \n",
" 8. 2. 1. 3. 5. 0. 2. 5. 8. 6. 2. 7. 7. 1. 8. 4.]]\n",
" | \n",
"
\n",
"
"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 3.5 Padding: Constant Central Pattern\n",
"Takes one 3-dimensional array with the constant value of padding. Your task is to pad the instances with the given constant value while maintaining the array at the center of the padding.\n",
"\n",
"To the extent that it is helpful, a formal description provided in the Appendix.\n",
"\n",
"**Your Task:** Implement the function `pad_constant_central_pattern`."
]
},
{
"cell_type": "code",
"execution_count": 33,
"metadata": {},
"outputs": [],
"source": [
"def pad_constant_central_pattern(x, cval):\n",
" \"\"\"\n",
" Takes one 3-dimensional array with the constant value of padding. \n",
" Your task is to pad the instances with the given constant value while\n",
" maintaining the array at the center of the padding.\n",
"\n",
" Parameters: \n",
" x (numpy.ndarray): 1-d numpy array with 2-d numpy arrays as elements.\n",
" cval (numpy.int64): scalar quantity.\n",
" \n",
" Returns: \n",
" numpy.ndarray: 3-dimensional int numpy array, (n, m, k).\n",
" \"\"\"\n",
" spectrograms = x\n",
" \n",
" # Input function dimension specification\n",
" assert(spectrograms.ndim == 1)\n",
" for utter in spectrograms:\n",
" assert(utter.ndim == 2)\n",
"\n",
" dim1 = spectrograms.shape[0] # n\n",
" dim2 = max([utter.shape[0] for utter in spectrograms]) # m\n",
" dim3 = spectrograms[0].shape[1] # k\n",
"\n",
" result = np.ones((dim1,dim2,dim3))\n",
"\n",
" #### Start of your code ####\n",
"\n",
" \n",
" \n",
" \n",
" \n",
" \n",
" #### End of your code ####\n",
"\n",
" # Assert output function dimension specification\n",
" assert(result.shape[0] == dim1)\n",
" assert(result.shape[1] == dim2)\n",
" assert(result.shape[2] == dim3)\n",
" \n",
" return result"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Test Example 1:"
]
},
{
"cell_type": "code",
"execution_count": 34,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[[[1. 1. 1. 1.]\n",
" [1. 1. 1. 1.]\n",
" [1. 1. 1. 1.]\n",
" [1. 1. 1. 1.]]\n",
"\n",
" [[1. 1. 1. 1.]\n",
" [1. 1. 1. 1.]\n",
" [1. 1. 1. 1.]\n",
" [1. 1. 1. 1.]]\n",
"\n",
" [[1. 1. 1. 1.]\n",
" [1. 1. 1. 1.]\n",
" [1. 1. 1. 1.]\n",
" [1. 1. 1. 1.]]]\n"
]
}
],
"source": [
"spectrograms = get_data_1()\n",
"duration = 2\n",
"print(pad_constant_central_pattern(spectrograms, 0))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Expected Output 1\n",
"\n",
""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
" \n",
" pad_constant_central_pattern(spectrograms, 0) | \n",
" \n",
"[[[3. 1. 4. 1.] \n",
" [5. 9. 2. 6.] \n",
" [5. 3. 5. 8.] \n",
" [0. 0. 0. 0.]] \n",
"\n",
" [[0. 0. 0. 0.] \n",
" [9. 7. 9. 3.] \n",
" [2. 3. 8. 4.] \n",
" [0. 0. 0. 0.]] \n",
"\n",
" [[6. 2. 6. 4.] \n",
" [3. 3. 8. 3.] \n",
" [2. 7. 9. 5.] \n",
" [0. 2. 8. 8.]]]\n",
" | \n",
"
\n",
"
"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Test Example 2:"
]
},
{
"cell_type": "code",
"execution_count": 35,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[[1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.\n",
" 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]\n",
" [1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.\n",
" 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]\n",
" [1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.\n",
" 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]\n",
" [1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.\n",
" 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]\n",
" [1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.\n",
" 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]\n",
" [1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.\n",
" 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]\n",
" [1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.\n",
" 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]\n",
" [1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.\n",
" 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]]\n"
]
}
],
"source": [
"data = get_data_2()\n",
"print(pad_constant_central_pattern(data, cval = 0)[0])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Expected Output 2**: \n",
"\n",
" \n",
" pad_constant_central_pattern( data, cval = 0)[0] | \n",
" \n",
"[[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. \n",
" 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.] \n",
" [3. 5. 2. 4. 7. 6. 8. 8. 1. 6. 7. 7. 8. 1. 5. 9. 8. 9. 4. 3. 0. 3. 5. 0. \n",
" 2. 3. 8. 1. 3. 3. 3. 7. 0. 1. 9. 9. 0. 4. 7. 3.] \n",
" [2. 7. 2. 0. 0. 4. 5. 5. 6. 8. 4. 1. 4. 9. 8. 1. 1. 7. 9. 9. 3. 6. 7. 2. \n",
" 0. 3. 5. 9. 4. 4. 6. 4. 4. 3. 4. 4. 8. 4. 3. 7.] \n",
" [5. 5. 0. 1. 5. 9. 3. 0. 5. 0. 1. 2. 4. 2. 0. 3. 2. 0. 7. 5. 9. 0. 2. 7. \n",
" 2. 9. 2. 3. 3. 2. 3. 4. 1. 2. 9. 1. 4. 6. 8. 2.] \n",
" [3. 0. 0. 6. 0. 6. 3. 3. 8. 8. 8. 2. 3. 2. 0. 8. 8. 3. 8. 2. 8. 4. 3. 0. \n",
" 4. 3. 6. 9. 8. 0. 8. 5. 9. 0. 9. 6. 5. 3. 1. 8.] \n",
" [0. 4. 9. 6. 5. 7. 8. 8. 9. 2. 8. 6. 6. 9. 1. 6. 8. 8. 3. 2. 3. 6. 3. 6. \n",
" 5. 7. 0. 8. 4. 6. 5. 8. 2. 3. 9. 7. 5. 3. 4. 5.] \n",
" [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. \n",
" 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.] \n",
" [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. \n",
" 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]]\n",
" | \n",
"
\n",
"
"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 4. PyTorch\n",
"PyTorch is an open-source deep learning library for python, and will be the primary framework throughout the course. You can install PyTorch referring to https://PyTorch.org/get-started/locally/. One of the fundamental concepts in PyTorch is the Tensor, a multi-dimensional matrix containing elements of a single type. Tensors are similar to numpy nd-arrays and tensors support most of the functionality that numpy matrices do.\n",
"\n",
"In following exercises, you will familiarize yourself with tensors and more importantly, the PyTorch documentation. It is important to note that for this section we are simply using PyTorch’s tensors as a matrix library, just like numpy. So please do not use functions in torch.nn, like torch.nn.ReLU.\n",
"\n",
"In PyTorch, it is very simple to convert between numpy arrays and tensors. PyTorch’s tensor library provides functions to perform the conversion in either direction. In this task, you are to write 2 functions:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 4.1 Converting from NumPy to PyTorch Tensor\n",
"In this task, you will implement a conversion function from arrays to tensors.\n",
"\n",
"The function should take a numpy ndarray and convert it to a PyTorch tensor.\n",
"\n",
"*Function torch.tensor is one of the simple ways to implement it but please do not use it this time. The PyTorch environment installed on Autolab is not an up-to-date version and does not support this function.*\n",
"\n",
"**Your Task**: Implement the function `numpy2tensor`."
]
},
{
"cell_type": "code",
"execution_count": 36,
"metadata": {},
"outputs": [],
"source": [
"def numpy2tensor(x):\n",
" \"\"\"\n",
" Creates a torch.Tensor from a numpy.ndarray.\n",
"\n",
" Parameters: \n",
" x (numpy.ndarray): 1-dimensional numpy array.\n",
"\n",
" Returns: \n",
" torch.Tensor: 1-dimensional torch tensor.\n",
" \"\"\"\n",
"\n",
" return NotImplemented"
]
},
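{
"cell_type": "markdown",
"metadata": {},
"source": [
"*Hint:* since `torch.tensor` is off-limits here, `torch.from_numpy` is one function that performs this conversion (note that the resulting tensor shares memory with the source array):\n",
"\n",
"```python\n",
"def numpy2tensor(x):\n",
"    # wrap the ndarray as a tensor without copying\n",
"    return torch.from_numpy(x)\n",
"```"
]
},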
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Test Example:"
]
},
{
"cell_type": "code",
"execution_count": 37,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n"
]
}
],
"source": [
"X = np.random.randint(-1000, 1000, size=3000)\n",
"\n",
"print(type(numpy2tensor(X)))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Expected Output**: \n",
"\n",
" \n",
" type(numpy2tensor(X)) | \n",
" <class 'torch.Tensor'> | \n",
"
\n",
"
"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 4.2 Converting from PyTorch Tensor to NumPy\n",
"\n",
"In this task, you will implement a conversion function from tensors to arrays.\n",
"\n",
"The function should take a PyTorch tensor and convert it to a numpy ndarray.\n",
"\n",
"**Your Task**: Implement the function `tensor2numpy`. "
]
},
{
"cell_type": "code",
"execution_count": 38,
"metadata": {},
"outputs": [],
"source": [
"def tensor2numpy(x):\n",
" \"\"\"\n",
" Creates a numpy.ndarray from a torch.Tensor.\n",
"\n",
" Parameters:\n",
" x (torch.Tensor): 1-dimensional torch tensor.\n",
"\n",
" Returns:\n",
" numpy.ndarray: 1-dimensional numpy array.\n",
" \"\"\"\n",
"\n",
" return NotImplemented"
]
},
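{
"cell_type": "markdown",
"metadata": {},
"source": [
"*Hint:* the `Tensor.numpy()` method is one way to perform this conversion (again, the array shares memory with the tensor):\n",
"\n",
"```python\n",
"def tensor2numpy(x):\n",
"    # view the tensor as an ndarray without copying\n",
"    return x.numpy()\n",
"```"
]
},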
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Test Example:"
]
},
{
"cell_type": "code",
"execution_count": 39,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n"
]
}
],
"source": [
"X = np.random.randint(-1000, 1000, size=3000)\n",
"X = torch.from_numpy(X)\n",
"\n",
"print(type(tensor2numpy(X)))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Expected Output**: \n",
"\n",
" \n",
" type(tensor2numpy(X)) | \n",
" <class 'numpy.ndarray'> | \n",
"
\n",
"
"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 4.3 Tensor Sum-Products\n",
"\n",
"In this task, you will implement the sum-product function for PyTorch Tensors. \n",
"\n",
"The function should take two tensors as input and return the sum over all the dimensions of the outer product of two vectors.\n",
"\n",
"**Your Task**: Implement the function `tensor_sumproducts`."
]
},
{
"cell_type": "code",
"execution_count": 40,
"metadata": {},
"outputs": [],
"source": [
"def tensor_sumproducts(x,y):\n",
" \"\"\"\n",
" Sum over all the dimensions of the outer product of two vectors.\n",
"\n",
" Parameters: \n",
" x (torch.Tensor): 1-dimensional torch tensor.\n",
" y (torch.Tensor): 1-dimensional torch tensor.\n",
"\n",
" Returns: \n",
" torch.int64: scalar quantity.\n",
" \"\"\"\n",
"\n",
" return NotImplemented"
]
},
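{
"cell_type": "markdown",
"metadata": {},
"source": [
"*Hint:* the sum over all entries of the outer product factorizes, since sum_ij x_i y_j = (sum_i x_i)(sum_j y_j). One sketch (not the only valid approach):\n",
"\n",
"```python\n",
"def tensor_sumproducts(x, y):\n",
"    # no need to materialize the outer product\n",
"    return torch.sum(x) * torch.sum(y)\n",
"```"
]
},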
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Test Example:"
]
},
{
"cell_type": "code",
"execution_count": 41,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"NotImplemented\n"
]
}
],
"source": [
"np.random.seed(0)\n",
"X = np.random.randint(-1000, 1000, size=3000)\n",
"X = torch.from_numpy(X)\n",
"Y = np.random.randint(-1000, 1000, size=3000)\n",
"Y = torch.from_numpy(Y)\n",
"\n",
"print(tensor_sumproducts(X,Y))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Expected Output**: \n",
"\n",
" \n",
" tensor_sumproducts(X,Y) | \n",
" tensor(265421520) | \n",
"
\n",
"
"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 4.4 Tensor ReLU\n",
"\n",
"In this task, you will implement the ReLU function for PyTorch Tensors.\n",
"\n",
"The function should take a tensor and calculate the ReLU function on each element.\n",
"\n",
"**Your Task**: Implement the function `tensor_ReLU_prime`."
]
},
{
"cell_type": "code",
"execution_count": 42,
"metadata": {},
"outputs": [],
"source": [
"def tensor_ReLU(x):\n",
" \"\"\"\n",
" Applies the rectified linear unit function element-wise.\n",
"\n",
" Parameters: \n",
" x (torch.Tensor): 2-dimensional torch tensor.\n",
"\n",
" Returns: \n",
" torch.Tensor: 2-dimensional torch tensor.\n",
" \"\"\"\n",
"\n",
" return NotImplemented"
]
},
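{
"cell_type": "markdown",
"metadata": {},
"source": [
"*Hint:* since `torch.nn` is off-limits, element-wise clamping is one option:\n",
"\n",
"```python\n",
"def tensor_ReLU(x):\n",
"    # max(element, 0) for every element\n",
"    return x.clamp(min=0)\n",
"```"
]
},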
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Test Example:"
]
},
{
"cell_type": "code",
"execution_count": 43,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"NotImplemented\n"
]
}
],
"source": [
"np.random.seed(0)\n",
"X = np.random.randint(-1000, 1000, size=(1000,1000))\n",
"X = torch.from_numpy(X)\n",
"\n",
"print(tensor_ReLU(X))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Expected Output**: \n",
"\n",
" \n",
" tensor_ReLU(X) | \n",
" \n",
" tensor([[ 0, 0, 653, ..., 0, 0, 0], \n",
" [ 0, 0, 0, ..., 988, 0, 0], \n",
" [265, 0, 608, ..., 773, 961, 0], \n",
" ..., \n",
" [429, 102, 0, ..., 467, 118, 0], \n",
" [532, 55, 0, ..., 912, 779, 294], \n",
" [ 0, 51, 0, ..., 0, 0, 0]])\n",
" | \n",
"
\n",
"
"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 4.5 Tensor ReLU Prime\n",
"\n",
"In this task, you will implement the ReLU Prime function for PyTorch Tensors.\n",
"\n",
"The function should take a tensor and calculate the derivative of the ReLU function on each element.\n",
"\n",
"**Your Task:** Implement the function `tensor_ReLU_prime`."
]
},
{
"cell_type": "code",
"execution_count": 44,
"metadata": {},
"outputs": [],
"source": [
"def tensor_ReLU_prime(x):\n",
" \"\"\"\n",
" Applies derivative of the rectified linear unit function \n",
" element-wise.\n",
"\n",
" Parameters: \n",
" x (torch.Tensor): 2-dimensional torch tensor.\n",
"\n",
" Returns: \n",
" torch.Tensor: 2-dimensional torch tensor.\n",
" \"\"\"\n",
"\n",
" return NotImplemented"
]
},
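{
"cell_type": "markdown",
"metadata": {},
"source": [
"*Hint:* the derivative of ReLU is 1 for positive inputs and 0 otherwise, so one sketch builds a boolean mask and casts it back to the input's type:\n",
"\n",
"```python\n",
"def tensor_ReLU_prime(x):\n",
"    # 1 where x > 0, 0 elsewhere, in the same dtype as x\n",
"    return (x > 0).type_as(x)\n",
"```"
]
},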
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Test Example:"
]
},
{
"cell_type": "code",
"execution_count": 45,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"NotImplemented\n"
]
}
],
"source": [
"np.random.seed(0)\n",
"X = np.random.randint(-1000, 1000, size=(1000,1000))\n",
"X = torch.from_numpy(X)\n",
"\n",
"print(tensor_ReLU_prime(X))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Expected Output**: \n",
"\n",
" \n",
" tensor_ReLU_prime(X) | \n",
" \n",
" tensor([[0, 0, 1, ..., 0, 0, 0], \n",
" [0, 0, 0, ..., 1, 0, 0], \n",
" [1, 0, 1, ..., 1, 1, 0], \n",
" ..., \n",
" [1, 1, 0, ..., 1, 1, 0], \n",
" [1, 1, 0, ..., 1, 1, 1], \n",
" [0, 1, 0, ..., 0, 0, 0]])\n",
" | \n",
"
\n",
"
"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.3"
}
},
"nbformat": 4,
"nbformat_minor": 2
}