Churn Rate Prediction for a Bank

The aim of this notebook is to predict customer churn for a bank, i.e. to identify which customers are going to leave the bank's service.

A neural network will be used as the modelling method in this notebook. The dataset was introduced by Pushkar Mandot in his blog post.

The dataset can be downloaded here.

The dataset contains 10,000 rows and 14 columns. The data is not explained in detail here, as the columns are self-explanatory.

Importing data

In [1]:
# Importing the libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
# Importing the dataset
dataset = pd.read_csv('data/Churn_Modelling.csv')  # forward slash keeps the path portable
print(dataset.shape)
dataset.head()
(10000, 14)
Out[1]:
RowNumber CustomerId Surname CreditScore Geography Gender Age Tenure Balance NumOfProducts HasCrCard IsActiveMember EstimatedSalary Exited
0 1 15634602 Hargrave 619 France Female 42 2 0.00 1 1 1 101348.88 1
1 2 15647311 Hill 608 Spain Female 41 1 83807.86 1 0 1 112542.58 0
2 3 15619304 Onio 502 France Female 42 8 159660.80 3 1 0 113931.57 1
3 4 15701354 Boni 699 France Female 39 1 0.00 2 0 0 93826.63 0
4 5 15737888 Mitchell 850 Spain Female 43 2 125510.82 1 1 1 79084.10 0

Creating the feature matrix and target vector

Create the matrix of features and the target vector. We exclude the first three columns, RowNumber, CustomerId, and Surname, since identifiers and names carry no predictive information. Column 14, Exited, is our target variable.

In [2]:
X = dataset.iloc[:, 3:13].values
y = dataset.iloc[:, 13].values

Let us take a glimpse at the predictors in X. As can be seen below, the dataset is fairly clean. Only two columns of string variables need to be converted to categorical codes or one-hot vectors before they can be fed into a classifier.

In [3]:
print(X)
[[619 'France' 'Female' ... 1 1 101348.88]
 [608 'Spain' 'Female' ... 0 1 112542.58]
 [502 'France' 'Female' ... 1 0 113931.57]
 ...
 [709 'France' 'Female' ... 0 1 42085.58]
 [772 'Germany' 'Male' ... 1 0 92888.52]
 [792 'France' 'Female' ... 1 0 38190.78]]

The target y contains only 0s and 1s: 0 stands for customers who stayed, and 1 represents customers who left.

In [4]:
print(y)
[1 0 1 ... 1 1 0]
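
It is worth checking how imbalanced the classes are before modelling, since plain accuracy can be flattering on skewed data. A quick check (an addition to the original notebook):

np.unique(y, return_counts=True)  # counts of customers who stayed (0) vs. left (1)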

Encoding categorical variables: we use LabelEncoder and OneHotEncoder from scikit-learn to transform the string variables in X. LabelEncoder first encodes the labels in a column as integers between 0 and n_classes - 1; OneHotEncoder then transforms those integers into one-hot vectors.

In [5]:
from sklearn.preprocessing import LabelEncoder, OneHotEncoder
labelencoder_X_1 = LabelEncoder()
X[:, 1] = labelencoder_X_1.fit_transform(X[:, 1])
labelencoder_X_2 = LabelEncoder()
X[:, 2] = labelencoder_X_2.fit_transform(X[:, 2])
X
Out[5]:
array([[619, 0, 0, ..., 1, 1, 101348.88],
       [608, 2, 0, ..., 0, 1, 112542.58],
       [502, 0, 0, ..., 1, 0, 113931.57],
       ...,
       [709, 0, 0, ..., 0, 1, 42085.58],
       [772, 1, 1, ..., 1, 0, 92888.52],
       [792, 0, 0, ..., 1, 0, 38190.78]], dtype=object)

Now you can see that the country names are replaced by 0, 1, and 2, while female and male are replaced by 0 and 1.

Label encoding has introduced a new problem in our data, however. LabelEncoder has replaced France with 0, Germany with 1, and Spain with 2, but these values have no ordinal meaning: Germany is not "greater than" France, and France is not "less than" Spain. We therefore need to create dummy variables for Country. We don't need to do the same for the Gender variable, as it is already binary.

Here, we use the OneHotEncoder to do the job.

In [6]:
onehotencoder = OneHotEncoder(categorical_features = [1])
X = onehotencoder.fit_transform(X).toarray()
# Drop the first dummy column to avoid the dummy variable trap:
# the dropped category is implied when the other two columns are both 0.
X = X[:, 1:]
X
Out[6]:
array([[0.0000000e+00, 0.0000000e+00, 6.1900000e+02, ..., 1.0000000e+00,
        1.0000000e+00, 1.0134888e+05],
       [0.0000000e+00, 1.0000000e+00, 6.0800000e+02, ..., 0.0000000e+00,
        1.0000000e+00, 1.1254258e+05],
       [0.0000000e+00, 0.0000000e+00, 5.0200000e+02, ..., 1.0000000e+00,
        0.0000000e+00, 1.1393157e+05],
       ...,
       [0.0000000e+00, 0.0000000e+00, 7.0900000e+02, ..., 0.0000000e+00,
        1.0000000e+00, 4.2085580e+04],
       [1.0000000e+00, 0.0000000e+00, 7.7200000e+02, ..., 1.0000000e+00,
        0.0000000e+00, 9.2888520e+04],
       [0.0000000e+00, 0.0000000e+00, 7.9200000e+02, ..., 1.0000000e+00,
        0.0000000e+00, 3.8190780e+04]])
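
Note that the categorical_features argument used above was deprecated in scikit-learn 0.20 and removed in 0.22. On a recent scikit-learn, a minimal equivalent sketch (assuming version 0.22 or later, where OneHotEncoder also handles strings directly, so the LabelEncoder step becomes unnecessary for Geography):

from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder

# One-hot encode column 1 (Geography) and pass the remaining columns through.
# drop='first' removes one dummy column, matching the X[:, 1:] step above.
ct = ColumnTransformer([('geo', OneHotEncoder(drop='first'), [1])],
                       remainder='passthrough')
X = ct.fit_transform(X)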

Splitting into training and test sets

In [7]:
# Splitting the dataset into the Training set and Test set

from sklearn.model_selection import train_test_split
# Note: no random_state is fixed here, so the exact split (and the numbers
# reported below) will vary between runs.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2)
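
With test_size = 0.2, the 10,000 rows are split into 8,000 training and 2,000 test samples, which can be verified directly:

print(X_train.shape, X_test.shape)  # (8000, 11) (2000, 11)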

Preprocessing

We fit a StandardScaler on the training set and transform it. To make the model work on unseen data, we must scale the test set the same way, so we use the same fitted scaler to transform the test data (without refitting it).

StandardScaler standardizes features by removing the mean and scaling to unit variance.

The standard score of a sample x is calculated as:

z = (x - u) / s

where u is the mean of the training samples (or zero if with_mean=False), and s is the standard deviation of the training samples (or one if with_std=False).

In [8]:
# Feature Scaling
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
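
As a quick sanity check (an addition), every column of the scaled training set should now have mean approximately 0 and standard deviation approximately 1:

print(X_train.mean(axis=0).round(2))  # ~0 for every column
print(X_train.std(axis=0).round(2))   # ~1 for every column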

Now the preprocessing of our data is done, and we can start building the neural network model. The library we use is Keras: a high-level neural networks API, written in Python and capable of running on top of TensorFlow, CNTK, or Theano. It was developed with a focus on enabling fast experimentation.

We need the Sequential module to initialize the NN and the Dense module to add hidden layers.

In [9]:
# Importing the Keras libraries and packages
import keras
from keras.models import Sequential
from keras.layers import Dense
Using TensorFlow backend.
In [10]:
#Initializing Neural Network
classifier = Sequential()

Adding layers to the neural network. Choosing the activation functions is a critical task. Here we use the rectifier (ReLU) function in the hidden layers and the sigmoid function in the output layer, since we want a binary result from the output layer; if there are more than two output categories, use the softmax function instead.

In [11]:
# Adding the input layer and the first hidden layer
classifier.add(Dense(activation="relu", input_dim=11, units=6, kernel_initializer="uniform"))
# Adding the second hidden layer
classifier.add(Dense(activation="relu", units=6, kernel_initializer="uniform"))
# Adding the output layer
classifier.add(Dense(activation="sigmoid", units=1, kernel_initializer="uniform"))

So far we have added multiple layers to our classifier; now let's compile them, which is done with the compile method. The arguments passed at final compilation control the whole neural network, so be careful at this step.

In [12]:
# Compiling Neural Network
classifier.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
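
For reference, the binary cross-entropy loss for a single sample with true label y in {0, 1} and predicted probability p is:

loss = -(y * log(p) + (1 - y) * log(1 - p))

which penalizes confident wrong predictions heavily and is the standard choice for binary classification.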

We will now train our model on the training data, but one thing still remains: we fit the model with the fit method. In earlier steps I said we would be optimizing our weights to improve model efficiency, so when are the weights updated? The batch size specifies the number of observations after which the weights are updated, and an epoch is one full pass through the training data. With 8,000 training samples and a batch size of 10, each epoch performs 800 weight updates, so 100 epochs perform 80,000 updates in total.

In [13]:
# Fitting our model 
classifier.fit(X_train, y_train, batch_size = 10, epochs = 100)
Epoch 1/100
8000/8000 [==============================] - 2s 202us/step - loss: 0.4914 - acc: 0.8006
Epoch 2/100
8000/8000 [==============================] - 1s 146us/step - loss: 0.4090 - acc: 0.8232
Epoch 3/100
8000/8000 [==============================] - 1s 143us/step - loss: 0.3953 - acc: 0.8291
Epoch 4/100
8000/8000 [==============================] - 1s 148us/step - loss: 0.3854 - acc: 0.8310
Epoch 5/100
8000/8000 [==============================] - 1s 153us/step - loss: 0.3782 - acc: 0.8322
Epoch 6/100
8000/8000 [==============================] - 1s 146us/step - loss: 0.3724 - acc: 0.8431
Epoch 7/100
8000/8000 [==============================] - 1s 142us/step - loss: 0.3688 - acc: 0.8449
Epoch 8/100
8000/8000 [==============================] - 1s 144us/step - loss: 0.3655 - acc: 0.8459
Epoch 9/100
8000/8000 [==============================] - 1s 145us/step - loss: 0.3628 - acc: 0.8506
Epoch 10/100
8000/8000 [==============================] - 1s 144us/step - loss: 0.3599 - acc: 0.8527
Epoch 11/100
8000/8000 [==============================] - 1s 145us/step - loss: 0.3571 - acc: 0.8526
Epoch 12/100
8000/8000 [==============================] - 1s 145us/step - loss: 0.3556 - acc: 0.8530
Epoch 13/100
8000/8000 [==============================] - 1s 145us/step - loss: 0.3544 - acc: 0.8531
Epoch 14/100
8000/8000 [==============================] - 1s 143us/step - loss: 0.3532 - acc: 0.8550
Epoch 15/100
8000/8000 [==============================] - 1s 143us/step - loss: 0.3509 - acc: 0.8560
Epoch 16/100
8000/8000 [==============================] - 1s 144us/step - loss: 0.3500 - acc: 0.8589
Epoch 17/100
8000/8000 [==============================] - 1s 143us/step - loss: 0.3500 - acc: 0.8576
Epoch 18/100
8000/8000 [==============================] - 1s 150us/step - loss: 0.3496 - acc: 0.8596
Epoch 19/100
8000/8000 [==============================] - 1s 152us/step - loss: 0.3485 - acc: 0.8572
Epoch 20/100
8000/8000 [==============================] - 1s 143us/step - loss: 0.3482 - acc: 0.8571
Epoch 21/100
8000/8000 [==============================] - 1s 143us/step - loss: 0.3476 - acc: 0.8587
Epoch 22/100
8000/8000 [==============================] - 1s 143us/step - loss: 0.3466 - acc: 0.8547
Epoch 23/100
8000/8000 [==============================] - 1s 146us/step - loss: 0.3470 - acc: 0.8585
Epoch 24/100
8000/8000 [==============================] - 1s 146us/step - loss: 0.3458 - acc: 0.8584
Epoch 25/100
8000/8000 [==============================] - 1s 146us/step - loss: 0.3458 - acc: 0.8590
Epoch 26/100
8000/8000 [==============================] - 1s 164us/step - loss: 0.3466 - acc: 0.8602
Epoch 27/100
8000/8000 [==============================] - 1s 150us/step - loss: 0.3447 - acc: 0.8591
Epoch 28/100
8000/8000 [==============================] - 1s 148us/step - loss: 0.3446 - acc: 0.8581
Epoch 29/100
8000/8000 [==============================] - 1s 145us/step - loss: 0.3439 - acc: 0.8590
Epoch 30/100
8000/8000 [==============================] - 1s 150us/step - loss: 0.3450 - acc: 0.8581
Epoch 31/100
8000/8000 [==============================] - 1s 153us/step - loss: 0.3445 - acc: 0.8592
Epoch 32/100
8000/8000 [==============================] - 1s 162us/step - loss: 0.3435 - acc: 0.8592
Epoch 33/100
8000/8000 [==============================] - 1s 158us/step - loss: 0.3424 - acc: 0.8607
Epoch 34/100
8000/8000 [==============================] - 1s 147us/step - loss: 0.3432 - acc: 0.8580
Epoch 35/100
8000/8000 [==============================] - 1s 150us/step - loss: 0.3433 - acc: 0.8597
Epoch 36/100
8000/8000 [==============================] - 1s 151us/step - loss: 0.3431 - acc: 0.8592
Epoch 37/100
8000/8000 [==============================] - 1s 147us/step - loss: 0.3430 - acc: 0.8609
Epoch 38/100
8000/8000 [==============================] - 1s 146us/step - loss: 0.3428 - acc: 0.8579
Epoch 39/100
8000/8000 [==============================] - 1s 153us/step - loss: 0.3424 - acc: 0.8595
Epoch 40/100
8000/8000 [==============================] - 1s 148us/step - loss: 0.3421 - acc: 0.8585
Epoch 41/100
8000/8000 [==============================] - 1s 145us/step - loss: 0.3419 - acc: 0.8592
Epoch 42/100
8000/8000 [==============================] - 1s 150us/step - loss: 0.3421 - acc: 0.8597
Epoch 43/100
8000/8000 [==============================] - 1s 156us/step - loss: 0.3419 - acc: 0.8582
Epoch 44/100
8000/8000 [==============================] - 1s 147us/step - loss: 0.3403 - acc: 0.8609
Epoch 45/100
8000/8000 [==============================] - 1s 184us/step - loss: 0.3414 - acc: 0.8569
Epoch 46/100
8000/8000 [==============================] - 1s 157us/step - loss: 0.3413 - acc: 0.8585
Epoch 47/100
8000/8000 [==============================] - 1s 150us/step - loss: 0.3414 - acc: 0.8606
Epoch 48/100
8000/8000 [==============================] - 1s 152us/step - loss: 0.3419 - acc: 0.8589
Epoch 49/100
8000/8000 [==============================] - 1s 147us/step - loss: 0.3411 - acc: 0.8589
Epoch 50/100
8000/8000 [==============================] - 1s 147us/step - loss: 0.3419 - acc: 0.8575
Epoch 51/100
8000/8000 [==============================] - 1s 156us/step - loss: 0.3403 - acc: 0.8617
Epoch 52/100
8000/8000 [==============================] - 1s 157us/step - loss: 0.3400 - acc: 0.8597
Epoch 53/100
8000/8000 [==============================] - 1s 155us/step - loss: 0.3409 - acc: 0.8596
Epoch 54/100
8000/8000 [==============================] - 1s 157us/step - loss: 0.3402 - acc: 0.8611
Epoch 55/100
8000/8000 [==============================] - 1s 158us/step - loss: 0.3401 - acc: 0.8591
Epoch 56/100
8000/8000 [==============================] - 1s 148us/step - loss: 0.3403 - acc: 0.8582
Epoch 57/100
8000/8000 [==============================] - 1s 146us/step - loss: 0.3404 - acc: 0.8622
Epoch 58/100
8000/8000 [==============================] - 1s 163us/step - loss: 0.3406 - acc: 0.8577
Epoch 59/100
8000/8000 [==============================] - 1s 160us/step - loss: 0.3394 - acc: 0.8592
Epoch 60/100
8000/8000 [==============================] - 1s 150us/step - loss: 0.3393 - acc: 0.8621
Epoch 61/100
8000/8000 [==============================] - 1s 148us/step - loss: 0.3411 - acc: 0.8601
Epoch 62/100
8000/8000 [==============================] - 1s 152us/step - loss: 0.3397 - acc: 0.8589
Epoch 63/100
8000/8000 [==============================] - 1s 160us/step - loss: 0.3400 - acc: 0.8590
Epoch 64/100
8000/8000 [==============================] - 1s 151us/step - loss: 0.3405 - acc: 0.8590
Epoch 65/100
8000/8000 [==============================] - 1s 147us/step - loss: 0.3395 - acc: 0.8606
Epoch 66/100
8000/8000 [==============================] - 1s 149us/step - loss: 0.3392 - acc: 0.8579
Epoch 67/100
8000/8000 [==============================] - 1s 148us/step - loss: 0.3397 - acc: 0.8590
Epoch 68/100
8000/8000 [==============================] - 1s 151us/step - loss: 0.3396 - acc: 0.8602
Epoch 69/100
8000/8000 [==============================] - 1s 162us/step - loss: 0.3390 - acc: 0.8610
Epoch 70/100
8000/8000 [==============================] - 1s 156us/step - loss: 0.3395 - acc: 0.8595
Epoch 71/100
8000/8000 [==============================] - 1s 165us/step - loss: 0.3389 - acc: 0.8597
Epoch 72/100
8000/8000 [==============================] - 1s 162us/step - loss: 0.3391 - acc: 0.8601
Epoch 73/100
8000/8000 [==============================] - 1s 156us/step - loss: 0.3389 - acc: 0.8605
Epoch 74/100
8000/8000 [==============================] - 1s 150us/step - loss: 0.3395 - acc: 0.8600
Epoch 75/100
8000/8000 [==============================] - 1s 151us/step - loss: 0.3391 - acc: 0.8600
Epoch 76/100
8000/8000 [==============================] - 1s 147us/step - loss: 0.3397 - acc: 0.8621
Epoch 77/100
8000/8000 [==============================] - 1s 148us/step - loss: 0.3385 - acc: 0.8614
Epoch 78/100
8000/8000 [==============================] - 1s 147us/step - loss: 0.3393 - acc: 0.8585
Epoch 79/100
8000/8000 [==============================] - 1s 152us/step - loss: 0.3390 - acc: 0.8597
Epoch 80/100
8000/8000 [==============================] - 1s 150us/step - loss: 0.3388 - acc: 0.8615
Epoch 81/100
8000/8000 [==============================] - 1s 147us/step - loss: 0.3395 - acc: 0.8622
Epoch 82/100
8000/8000 [==============================] - 1s 144us/step - loss: 0.3387 - acc: 0.8597
Epoch 83/100
8000/8000 [==============================] - 1s 148us/step - loss: 0.3385 - acc: 0.8602
Epoch 84/100
8000/8000 [==============================] - 1s 173us/step - loss: 0.3388 - acc: 0.8589
Epoch 85/100
8000/8000 [==============================] - 1s 166us/step - loss: 0.3389 - acc: 0.8617
Epoch 86/100
8000/8000 [==============================] - 1s 150us/step - loss: 0.3385 - acc: 0.8625
Epoch 87/100
8000/8000 [==============================] - 1s 154us/step - loss: 0.3379 - acc: 0.8579
Epoch 88/100
8000/8000 [==============================] - 1s 150us/step - loss: 0.3391 - acc: 0.8601
Epoch 89/100
8000/8000 [==============================] - 1s 147us/step - loss: 0.3378 - acc: 0.8601
Epoch 90/100
8000/8000 [==============================] - 1s 148us/step - loss: 0.3388 - acc: 0.8611
Epoch 91/100
8000/8000 [==============================] - 1s 153us/step - loss: 0.3384 - acc: 0.8596
Epoch 92/100
8000/8000 [==============================] - 1s 153us/step - loss: 0.3381 - acc: 0.8596
Epoch 93/100
8000/8000 [==============================] - 1s 150us/step - loss: 0.3380 - acc: 0.8576
Epoch 94/100
8000/8000 [==============================] - 1s 145us/step - loss: 0.3379 - acc: 0.8582
Epoch 95/100
8000/8000 [==============================] - 1s 146us/step - loss: 0.3376 - acc: 0.8591
Epoch 96/100
8000/8000 [==============================] - 1s 145us/step - loss: 0.3374 - acc: 0.8619
Epoch 97/100
8000/8000 [==============================] - 1s 157us/step - loss: 0.3370 - acc: 0.8626
Epoch 98/100
8000/8000 [==============================] - 1s 156us/step - loss: 0.3355 - acc: 0.8580
Epoch 99/100
8000/8000 [==============================] - 1s 145us/step - loss: 0.3361 - acc: 0.8625
Epoch 100/100
8000/8000 [==============================] - 1s 145us/step - loss: 0.3354 - acc: 0.8619
Out[13]:
<keras.callbacks.History at 0x232a0752320>

Predicting on the test data

Predicting the test set results. The model outputs the probability of each customer leaving the company; we convert that probability into a binary 0 or 1 by thresholding at 0.5.

In [14]:
# Predicting the Test set results
y_pred = classifier.predict(X_test)
y_pred = (y_pred > 0.5)

Confusion matrix of test set and prediction accuracy

This is the final step, where we evaluate our model's performance. Since we have the true labels, we can build a confusion matrix to check the accuracy of the model. In scikit-learn's convention, rows correspond to the true classes and columns to the predicted classes.

In [15]:
# Creating the Confusion Matrix
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
cm
Out[15]:
array([[1477,  111],
       [ 186,  226]], dtype=int64)
In [16]:
test_accuracy = np.sum([cm[0,0], cm[1,1]])/np.sum(cm)
test_accuracy
Out[16]:
0.8515
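
The manual computation above sums the diagonal of the confusion matrix (the correct predictions) and divides by the total count. Equivalently, a sketch using scikit-learn's built-in metric:

from sklearn.metrics import accuracy_score
accuracy_score(y_test, y_pred)  # 0.8515, matching the manual computation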

We achieved 85.15% accuracy on the test set, which is quite good. For context, always predicting "stays" would already score about 79.4% on this split (1,588 of the 2,000 test customers did not churn), so the model adds real signal.

Adjusting the neural network

Let's change the structure of the network to see what happens: we add another hidden layer and widen the first one from 6 to 8 units.

In [18]:
#Initializing Neural Network
classifier2 = Sequential()

# Adding the input layer and the first hidden layer
classifier2.add(Dense(activation="relu", input_dim=11, units=8, kernel_initializer="uniform"))
# Adding the second hidden layer
classifier2.add(Dense(activation="relu", units=6, kernel_initializer="uniform"))
# Adding the third hidden layer
classifier2.add(Dense(activation="relu", units=6, kernel_initializer="uniform"))
# Adding the output layer
classifier2.add(Dense(activation="sigmoid", units=1, kernel_initializer="uniform"))

# Compiling Neural Network
classifier2.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])

# Fitting our model 
classifier2.fit(X_train, y_train, batch_size = 10, epochs = 100)

# Predicting the Test set results
y_pred2 = classifier2.predict(X_test)
y_pred2 = (y_pred2 > 0.5)
cm2 = confusion_matrix(y_test, y_pred2)
test_accuracy2 = np.sum([cm2[0,0], cm2[1,1]])/np.sum(cm2)
test_accuracy2
Epoch 1/100
8000/8000 [==============================] - 2s 280us/step - loss: 0.4712 - acc: 0.7961
Epoch 2/100
8000/8000 [==============================] - 1s 186us/step - loss: 0.4183 - acc: 0.7969
Epoch 3/100
8000/8000 [==============================] - 2s 226us/step - loss: 0.4090 - acc: 0.8270
Epoch 4/100
8000/8000 [==============================] - 2s 196us/step - loss: 0.3973 - acc: 0.8306
Epoch 5/100
8000/8000 [==============================] - 2s 196us/step - loss: 0.3861 - acc: 0.8331
Epoch 6/100
8000/8000 [==============================] - 2s 203us/step - loss: 0.3772 - acc: 0.8431
Epoch 7/100
8000/8000 [==============================] - 2s 195us/step - loss: 0.3696 - acc: 0.8480
Epoch 8/100
8000/8000 [==============================] - 2s 191us/step - loss: 0.3659 - acc: 0.8495
Epoch 9/100
8000/8000 [==============================] - 2s 194us/step - loss: 0.3629 - acc: 0.8541
Epoch 10/100
8000/8000 [==============================] - 2s 198us/step - loss: 0.3613 - acc: 0.8555
Epoch 11/100
8000/8000 [==============================] - 2s 189us/step - loss: 0.3595 - acc: 0.8529
Epoch 12/100
8000/8000 [==============================] - 2s 192us/step - loss: 0.3587 - acc: 0.8561
Epoch 13/100
8000/8000 [==============================] - 2s 210us/step - loss: 0.3570 - acc: 0.8552
Epoch 14/100
8000/8000 [==============================] - 2s 217us/step - loss: 0.3554 - acc: 0.8560
Epoch 15/100
8000/8000 [==============================] - 2s 189us/step - loss: 0.3546 - acc: 0.8561
Epoch 16/100
8000/8000 [==============================] - 2s 191us/step - loss: 0.3544 - acc: 0.8564
Epoch 17/100
8000/8000 [==============================] - 2s 196us/step - loss: 0.3532 - acc: 0.8571
Epoch 18/100
8000/8000 [==============================] - 2s 202us/step - loss: 0.3517 - acc: 0.8595
Epoch 19/100
8000/8000 [==============================] - 2s 190us/step - loss: 0.3513 - acc: 0.8582
Epoch 20/100
8000/8000 [==============================] - 2s 189us/step - loss: 0.3516 - acc: 0.8562
Epoch 21/100
8000/8000 [==============================] - 2s 211us/step - loss: 0.3506 - acc: 0.8579
Epoch 22/100
8000/8000 [==============================] - 2s 204us/step - loss: 0.3485 - acc: 0.8589
Epoch 23/100
8000/8000 [==============================] - 2s 208us/step - loss: 0.3500 - acc: 0.8586
Epoch 24/100
8000/8000 [==============================] - 2s 202us/step - loss: 0.3505 - acc: 0.8579
Epoch 25/100
8000/8000 [==============================] - 2s 215us/step - loss: 0.3487 - acc: 0.8581
Epoch 26/100
8000/8000 [==============================] - 2s 210us/step - loss: 0.3484 - acc: 0.8594
Epoch 27/100
8000/8000 [==============================] - 2s 195us/step - loss: 0.3488 - acc: 0.8602
Epoch 28/100
8000/8000 [==============================] - 2s 197us/step - loss: 0.3486 - acc: 0.8574
Epoch 29/100
8000/8000 [==============================] - 2s 196us/step - loss: 0.3477 - acc: 0.8592
Epoch 30/100
8000/8000 [==============================] - 2s 189us/step - loss: 0.3480 - acc: 0.8590
Epoch 31/100
8000/8000 [==============================] - 2s 195us/step - loss: 0.3473 - acc: 0.8589
Epoch 32/100
8000/8000 [==============================] - 2s 201us/step - loss: 0.3458 - acc: 0.8587
Epoch 33/100
8000/8000 [==============================] - 2s 205us/step - loss: 0.3461 - acc: 0.8591
Epoch 34/100
8000/8000 [==============================] - 2s 200us/step - loss: 0.3452 - acc: 0.8616
Epoch 35/100
8000/8000 [==============================] - 2s 191us/step - loss: 0.3453 - acc: 0.8560
Epoch 36/100
8000/8000 [==============================] - 2s 202us/step - loss: 0.3454 - acc: 0.8581
Epoch 37/100
8000/8000 [==============================] - 2s 189us/step - loss: 0.3454 - acc: 0.8599
Epoch 38/100
8000/8000 [==============================] - 2s 192us/step - loss: 0.3454 - acc: 0.8585
Epoch 39/100
8000/8000 [==============================] - 2s 196us/step - loss: 0.3441 - acc: 0.8604
Epoch 40/100
8000/8000 [==============================] - 2s 197us/step - loss: 0.3443 - acc: 0.8596
Epoch 41/100
8000/8000 [==============================] - 2s 194us/step - loss: 0.3433 - acc: 0.8582
Epoch 42/100
8000/8000 [==============================] - 1s 187us/step - loss: 0.3435 - acc: 0.8586
Epoch 43/100
8000/8000 [==============================] - 2s 215us/step - loss: 0.3438 - acc: 0.8594
Epoch 44/100
8000/8000 [==============================] - 2s 206us/step - loss: 0.3436 - acc: 0.8590
Epoch 45/100
8000/8000 [==============================] - 2s 189us/step - loss: 0.3431 - acc: 0.8612
Epoch 46/100
8000/8000 [==============================] - 1s 187us/step - loss: 0.3431 - acc: 0.8599
Epoch 47/100
8000/8000 [==============================] - 2s 201us/step - loss: 0.3440 - acc: 0.8596
Epoch 48/100
8000/8000 [==============================] - 2s 190us/step - loss: 0.3427 - acc: 0.8602
Epoch 49/100
8000/8000 [==============================] - 2s 188us/step - loss: 0.3433 - acc: 0.8606
Epoch 50/100
8000/8000 [==============================] - 2s 191us/step - loss: 0.3415 - acc: 0.8621
Epoch 51/100
8000/8000 [==============================] - 2s 201us/step - loss: 0.3430 - acc: 0.8587
Epoch 52/100
8000/8000 [==============================] - 2s 188us/step - loss: 0.3418 - acc: 0.8600
Epoch 53/100
8000/8000 [==============================] - 2s 196us/step - loss: 0.3427 - acc: 0.8611
Epoch 54/100
8000/8000 [==============================] - 2s 217us/step - loss: 0.3415 - acc: 0.8612
Epoch 55/100
8000/8000 [==============================] - 2s 194us/step - loss: 0.3423 - acc: 0.8599
Epoch 56/100
8000/8000 [==============================] - 2s 189us/step - loss: 0.3421 - acc: 0.8604
Epoch 57/100
8000/8000 [==============================] - 2s 191us/step - loss: 0.3417 - acc: 0.8609
Epoch 58/100
8000/8000 [==============================] - 2s 202us/step - loss: 0.3426 - acc: 0.8599
Epoch 59/100
8000/8000 [==============================] - 2s 188us/step - loss: 0.3414 - acc: 0.8605
Epoch 60/100
8000/8000 [==============================] - 2s 191us/step - loss: 0.3424 - acc: 0.8607
Epoch 61/100
8000/8000 [==============================] - 2s 193us/step - loss: 0.3413 - acc: 0.8616
Epoch 62/100
8000/8000 [==============================] - 2s 199us/step - loss: 0.3418 - acc: 0.8611
Epoch 63/100
8000/8000 [==============================] - 2s 192us/step - loss: 0.3408 - acc: 0.8604
Epoch 64/100
8000/8000 [==============================] - 2s 210us/step - loss: 0.3411 - acc: 0.8626
Epoch 65/100
8000/8000 [==============================] - 2s 197us/step - loss: 0.3418 - acc: 0.8625
Epoch 66/100
8000/8000 [==============================] - 2s 196us/step - loss: 0.3411 - acc: 0.8621
Epoch 67/100
8000/8000 [==============================] - 2s 191us/step - loss: 0.3412 - acc: 0.8591
Epoch 68/100
8000/8000 [==============================] - 2s 188us/step - loss: 0.3404 - acc: 0.8607
Epoch 69/100
8000/8000 [==============================] - 2s 197us/step - loss: 0.3406 - acc: 0.8619
Epoch 70/100
8000/8000 [==============================] - 2s 192us/step - loss: 0.3407 - acc: 0.8590
Epoch 71/100
8000/8000 [==============================] - 2s 188us/step - loss: 0.3417 - acc: 0.8601
Epoch 72/100
8000/8000 [==============================] - 2s 189us/step - loss: 0.3414 - acc: 0.8615
Epoch 73/100
8000/8000 [==============================] - 2s 208us/step - loss: 0.3407 - acc: 0.8607
Epoch 74/100
8000/8000 [==============================] - 2s 222us/step - loss: 0.3413 - acc: 0.8615
Epoch 75/100
8000/8000 [==============================] - 2s 210us/step - loss: 0.3411 - acc: 0.8620
Epoch 76/100
8000/8000 [==============================] - 2s 197us/step - loss: 0.3394 - acc: 0.8610
Epoch 77/100
8000/8000 [==============================] - 2s 203us/step - loss: 0.3405 - acc: 0.8626
Epoch 78/100
8000/8000 [==============================] - 2s 195us/step - loss: 0.3403 - acc: 0.8586
Epoch 79/100
8000/8000 [==============================] - 2s 201us/step - loss: 0.3399 - acc: 0.8624
Epoch 80/100
8000/8000 [==============================] - 2s 204us/step - loss: 0.3404 - acc: 0.8600
Epoch 81/100
8000/8000 [==============================] - 2s 196us/step - loss: 0.3392 - acc: 0.8632
Epoch 82/100
8000/8000 [==============================] - 2s 200us/step - loss: 0.3399 - acc: 0.8617
Epoch 83/100
8000/8000 [==============================] - 2s 195us/step - loss: 0.3395 - acc: 0.8622
Epoch 84/100
8000/8000 [==============================] - 2s 222us/step - loss: 0.3389 - acc: 0.8611
Epoch 85/100
8000/8000 [==============================] - 2s 198us/step - loss: 0.3401 - acc: 0.8606
Epoch 86/100
8000/8000 [==============================] - 2s 191us/step - loss: 0.3397 - acc: 0.8631
Epoch 87/100
8000/8000 [==============================] - 2s 198us/step - loss: 0.3388 - acc: 0.8626
Epoch 88/100
8000/8000 [==============================] - 2s 203us/step - loss: 0.3404 - acc: 0.8616
Epoch 89/100
8000/8000 [==============================] - 2s 193us/step - loss: 0.3398 - acc: 0.8629
Epoch 90/100
8000/8000 [==============================] - 2s 192us/step - loss: 0.3394 - acc: 0.8609
Epoch 91/100
8000/8000 [==============================] - 2s 202us/step - loss: 0.3397 - acc: 0.8607
Epoch 92/100
8000/8000 [==============================] - 2s 197us/step - loss: 0.3396 - acc: 0.8611
Epoch 93/100
8000/8000 [==============================] - 2s 194us/step - loss: 0.3391 - acc: 0.8632
Epoch 94/100
8000/8000 [==============================] - 2s 208us/step - loss: 0.3395 - acc: 0.8619
Epoch 95/100
8000/8000 [==============================] - 2s 219us/step - loss: 0.3390 - acc: 0.8614
Epoch 96/100
8000/8000 [==============================] - 2s 192us/step - loss: 0.3389 - acc: 0.8632
Epoch 97/100
8000/8000 [==============================] - 2s 192us/step - loss: 0.3394 - acc: 0.8634
Epoch 98/100
8000/8000 [==============================] - 2s 196us/step - loss: 0.3396 - acc: 0.8620
Epoch 99/100
8000/8000 [==============================] - 2s 200us/step - loss: 0.3376 - acc: 0.8625
Epoch 100/100
8000/8000 [==============================] - 2s 190us/step - loss: 0.3392 - acc: 0.8605
Out[18]:
0.8555

Now the prediction accuracy on the test set has increased slightly, from 85.15% to 85.55%.

Adjusting the architecture of the neural network may achieve better results. Note, however, that a 0.4-point difference on a single random split can easily be noise, so changes should be validated more carefully, for example with cross-validation as sketched below.
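
A minimal sketch of such a check, assuming the scikit-learn wrapper shipped with this Keras version (keras.wrappers.scikit_learn); the helper name build_classifier is ours:

from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import cross_val_score

def build_classifier():
    # Rebuild the original architecture for each fold
    model = Sequential()
    model.add(Dense(activation="relu", input_dim=11, units=6, kernel_initializer="uniform"))
    model.add(Dense(activation="relu", units=6, kernel_initializer="uniform"))
    model.add(Dense(activation="sigmoid", units=1, kernel_initializer="uniform"))
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
    return model

clf = KerasClassifier(build_fn=build_classifier, batch_size=10, epochs=100, verbose=0)
accuracies = cross_val_score(clf, X_train, y_train, cv=10)
print(accuracies.mean(), accuracies.std())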


