quacknet.activationDerivativeFunctions
import numpy as np

def ReLUDerivative(values):
    """
    Applies the Leaky Rectified Linear Unit (Leaky ReLU) derivative activation function.

    Args:
        values (ndarray): Input array to differentiate.

    Returns:
        ndarray: Array with the Leaky ReLU derivative applied to it.
    """
    # Plain ReLU derivative (unused):
    #return np.where(values > 0, 1, 0)
    return np.where(values > 0, 1, 0.01)  # Leaky ReLU derivative, to prevent weight gradients all becoming 0

def sigmoid(values):  # helper for SigmoidDerivative; defined here to avoid importing from activationFunctions.py
    return 1 / (1 + np.exp(-values))

def SigmoidDerivative(values):
    """
    Applies the sigmoid derivative activation function.

    Args:
        values (ndarray): Input array to differentiate.

    Returns:
        ndarray: Array with the sigmoid derivative applied to it.
    """
    return sigmoid(values) * (1 - sigmoid(values))

def TanHDerivative(values):
    """
    Applies the hyperbolic tangent (tanh) derivative activation function.

    Args:
        values (ndarray): Input array to differentiate.

    Returns:
        ndarray: Array with the tanh derivative applied to it.
    """
    return 1 - (np.tanh(values) ** 2)

def LinearDerivative(values):
    """
    Applies the linear derivative activation function.

    Args:
        values (ndarray): Input array to differentiate.

    Returns:
        ndarray: Array with the linear derivative applied to it.

    Note:
        The derivative is an array the same shape as the input in which every element is 1.
    """
    return np.ones_like(values)

def SoftMaxDerivative(trueValue, values):
    """
    Applies the softmax derivative activation function.

    Args:
        trueValue (ndarray): True labels for the input.
        values (ndarray): Predicted softmax output array.

    Returns:
        ndarray: Array with the softmax derivative applied to it.

    Note:
        This library forces cross entropy loss whenever softmax is used, so the combined derivative simplifies to: values - trueValue
    """
    # General softmax Jacobian approach, kept commented out for reference:
    #from .lossDerivativeFunctions import CrossEntropyLossDerivative
    #if(lossDerivative == CrossEntropyLossDerivative):
    #    return values - trueValue
    #summ = 0
    #for i in range(len(values)):
    #    if(currValueIndex == i):
    #        jacobianMatrix = values[currValueIndex] * (1 - values[currValueIndex])
    #    else:
    #        jacobianMatrix = -1 * values[currValueIndex] * values[i]
    #    summ += lossDerivative(values[i], trueValue[i], len(values)) * jacobianMatrix
    #return summ
    return values - trueValue  # the simplification holds because the library forces cross entropy loss whenever softmax is used
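As a quick sanity check of the derivatives above, they can be compared against central finite differences of the corresponding forward functions. This is a minimal illustrative sketch, not part of QuackNet itself: the forward sigmoid is defined locally rather than imported from activationFunctions.py, and the import path is assumed from the module name in the page title.

import numpy as np
from quacknet.activationDerivativeFunctions import SigmoidDerivative, TanHDerivative

def numericalDerivative(f, x, eps=1e-6):
    # Elementwise central finite difference approximation of f'(x)
    return (f(x + eps) - f(x - eps)) / (2 * eps)

def sigmoidForward(v):
    # Local stand-in for the forward sigmoid activation
    return 1 / (1 + np.exp(-v))

x = np.linspace(-3, 3, 7)
assert np.allclose(SigmoidDerivative(x), numericalDerivative(sigmoidForward, x), atol=1e-6)
assert np.allclose(TanHDerivative(x), numericalDerivative(np.tanh, x), atol=1e-6)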
ReLUDerivative(values)

Applies the Leaky Rectified Linear Unit (Leaky ReLU) derivative activation function.

Args:
    values (ndarray): Input array to differentiate.

Returns:
    ndarray: Array with the Leaky ReLU derivative applied to it (1 where values > 0, 0.01 elsewhere).
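A minimal sketch of how this derivative is typically used in a backward pass; the names z and upstreamGrad are illustrative placeholders rather than part of QuackNet's API, and the import path is assumed from the page title.

import numpy as np
from quacknet.activationDerivativeFunctions import ReLUDerivative

z = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])            # pre-activation values from the forward pass
upstreamGrad = np.array([0.1, 0.2, 0.3, 0.4, 0.5])   # dL/da flowing back from the next layer

# Chain rule: dL/dz = dL/da * da/dz, where da/dz is 0.01 for non-positive inputs and 1 otherwise
dLdz = upstreamGrad * ReLUDerivative(z)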
SigmoidDerivative(values)

Applies the sigmoid derivative activation function.

Args:
    values (ndarray): Input array to differentiate.

Returns:
    ndarray: Array with the sigmoid derivative applied to it.
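SigmoidDerivative recomputes sigmoid(values) from the pre-activation input. If the forward activation has already been cached, the same quantity equals a * (1 - a); a small illustrative check (the variable names z and a are placeholders, not QuackNet API):

import numpy as np
from quacknet.activationDerivativeFunctions import sigmoid, SigmoidDerivative

z = np.array([-1.0, 0.0, 2.5])
a = sigmoid(z)   # forward activation, as it would be cached during a forward pass

assert np.allclose(SigmoidDerivative(z), a * (1 - a))
# The derivative peaks at 0.25 at z = 0 and shrinks toward 0 for large |z|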
TanHDerivative(values)

Applies the hyperbolic tangent (tanh) derivative activation function.

Args:
    values (ndarray): Input array to differentiate.

Returns:
    ndarray: Array with the tanh derivative applied to it.
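A short numeric illustration with placeholder values: the tanh derivative is 1 at the origin and decays toward 0 as |values| grows, which is the usual vanishing-gradient behaviour of saturating activations.

import numpy as np
from quacknet.activationDerivativeFunctions import TanHDerivative

z = np.array([0.0, 1.0, 3.0])
print(TanHDerivative(z))   # approximately [1.0, 0.42, 0.0099]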
LinearDerivative(values)

Applies the linear derivative activation function.

Args:
    values (ndarray): Input array to differentiate.

Returns:
    ndarray: Array with the linear derivative applied to it.

Note:
    The derivative is an array the same shape as the input in which every element is 1.
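Since the identity activation has derivative 1 everywhere, multiplying an upstream gradient by LinearDerivative leaves it unchanged; a minimal sketch with illustrative values:

import numpy as np
from quacknet.activationDerivativeFunctions import LinearDerivative

z = np.array([[1.5, -2.0], [0.0, 3.0]])
upstreamGrad = np.array([[0.1, -0.2], [0.3, 0.4]])

dLdz = upstreamGrad * LinearDerivative(z)   # elementwise multiply by ones: identical to upstreamGrad
assert np.array_equal(dLdz, upstreamGrad)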
SoftMaxDerivative(trueValue, values)

Applies the softmax derivative activation function.

Args:
    trueValue (ndarray): True labels for the input.
    values (ndarray): Predicted softmax output array.

Returns:
    ndarray: Array with the softmax derivative applied to it.

Note:
    This library forces cross entropy loss whenever softmax is used, so the combined derivative simplifies to values - trueValue.
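A worked sketch of the simplification with illustrative values, assuming trueValue is one-hot and values is a softmax output: the combined gradient of cross entropy loss with respect to the logits is simply the softmax output minus the one-hot target.

import numpy as np
from quacknet.activationDerivativeFunctions import SoftMaxDerivative

logits = np.array([2.0, 1.0, 0.1])
values = np.exp(logits) / np.sum(np.exp(logits))   # softmax output, approximately [0.66, 0.24, 0.10]
trueValue = np.array([1.0, 0.0, 0.0])              # one-hot label

grad = SoftMaxDerivative(trueValue, values)        # values - trueValue, approximately [-0.34, 0.24, 0.10]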