quacknet.activationFunctions
```python
import numpy as np

def relu(values, alpha=0.01):
    """
    Applies the Leaky Rectified Linear Unit (Leaky ReLU) activation function.

    Args:
        values (ndarray): Input array to apply Leaky ReLU to.
        alpha (float, optional): Slope for negative values. Default is 0.01.

    Returns:
        ndarray: Array with Leaky ReLU applied to it.
    """
    return np.maximum(values * alpha, values)

def sigmoid(values):
    """
    Applies the sigmoid activation function.

    Args:
        values (ndarray): Input array to apply sigmoid to.

    Returns:
        ndarray: Array with sigmoid applied to it.
    """
    return 1 / (1 + np.exp(-values))

def tanH(values):
    """
    Applies the hyperbolic tangent (tanh) activation function.

    Args:
        values (ndarray): Input array to apply tanh to.

    Returns:
        ndarray: Array with tanh applied to it.
    """
    return np.tanh(values)

def linear(values):  # Don't use: too demanding on the CPU
    """
    Applies the linear activation function.

    Args:
        values (ndarray): Input array.

    Returns:
        ndarray: Output array (same as input).
    """
    return values

def softMax(values):
    """
    Applies the softmax activation function.

    Args:
        values (ndarray): Input array to apply softmax to.

    Returns:
        ndarray: Array with softmax applied to it.

    Note:
        The function handles overflow by subtracting the maximum value from the inputs.
    """
    values = np.array(values, dtype=np.float64)
    maxVal = np.max(values)
    values = values - maxVal
    summ = np.sum(np.exp(values))
    out = np.exp(values) / summ
    return out
```
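A minimal usage sketch, assuming the package is installed and importable under the module path in the heading; the sample values are illustrative only. Every activation is a plain NumPy operation that preserves the input shape.

```python
import numpy as np

# Import path assumed from the module name shown above.
from quacknet import activationFunctions as af

x = np.array([[-1.0,  0.0, 2.0],
              [ 3.0, -2.0, 0.5]])

# Each activation returns an array with the same shape as its input.
for fn in (af.relu, af.sigmoid, af.tanH, af.linear, af.softMax):
    print(fn.__name__, fn(x).shape)
```

Note that softMax as written normalises over every element of the array (no axis argument), not per row, so batched inputs should be passed one sample at a time.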
relu(values, alpha=0.01)

Applies the Leaky Rectified Linear Unit (Leaky ReLU) activation function. Despite its name, relu implements the leaky variant: negative inputs are scaled by alpha rather than clamped to zero.

Args:
    values (ndarray): Input array to apply Leaky ReLU to.
    alpha (float, optional): Slope for negative values. Default is 0.01.

Returns:
    ndarray: Array with Leaky ReLU applied to it.
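A quick illustrative call (import path assumed from the module name); the values are chosen only to show the leaky slope on negative inputs.

```python
import numpy as np
from quacknet.activationFunctions import relu  # assumed import path

x = np.array([-2.0, -0.5, 0.0, 1.5])
print(relu(x))             # approximately [-0.02  -0.005  0.  1.5] -- negatives scaled by 0.01
print(relu(x, alpha=0.1))  # approximately [-0.2   -0.05   0.  1.5] -- steeper negative slope
```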
sigmoid(values)

Applies the sigmoid activation function.

Args:
    values (ndarray): Input array to apply sigmoid to.

Returns:
    ndarray: Array with sigmoid applied to it.
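A short sketch of the expected behaviour (import path assumed from the module name): outputs always lie in the open interval (0, 1), with sigmoid(0) = 0.5.

```python
import numpy as np
from quacknet.activationFunctions import sigmoid  # assumed import path

x = np.array([-4.0, 0.0, 4.0])
print(sigmoid(x))  # approximately [0.018  0.5  0.982]
```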
tanH(values)

Applies the hyperbolic tangent (tanh) activation function.

Args:
    values (ndarray): Input array to apply tanh to.

Returns:
    ndarray: Array with tanh applied to it.
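An illustrative call (import path assumed from the module name): outputs lie in (-1, 1) and are zero-centred, unlike sigmoid.

```python
import numpy as np
from quacknet.activationFunctions import tanH  # assumed import path

x = np.array([-2.0, 0.0, 2.0])
print(tanH(x))  # approximately [-0.964  0.  0.964]
```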
linear(values)

Applies the linear activation function.

Args:
    values (ndarray): Input array.

Returns:
    ndarray: Output array (same as input).
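A minimal sketch (import path assumed from the module name) showing that the input is returned unchanged; per the source, the same array object is handed back rather than a copy.

```python
import numpy as np
from quacknet.activationFunctions import linear  # assumed import path

x = np.array([-1.0, 0.0, 3.0])
print(linear(x))       # [-1.  0.  3.]
print(linear(x) is x)  # True -- the identical array object is returned, not a copy
```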
softMax(values)

Applies the softmax activation function.

Args:
    values (ndarray): Input array to apply softmax to.

Returns:
    ndarray: Array with softmax applied to it.

Note:
    The function handles overflow by subtracting the maximum value from the inputs.
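A small sketch (import path assumed from the module name) showing that the outputs form a probability distribution and that the max-subtraction trick leaves the result unchanged while preventing overflow for large inputs.

```python
import numpy as np
from quacknet.activationFunctions import softMax  # assumed import path

logits = np.array([1.0, 2.0, 3.0])
p = softMax(logits)
print(p)        # approximately [0.09  0.245  0.665]
print(p.sum())  # 1.0 (up to floating-point rounding)

# Shifting every input by the same constant does not change softmax, which is
# why subtracting the maximum avoids overflow without affecting the output.
print(softMax(logits + 1000.0))  # same probabilities as above
```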