[TensorFlow2] Nonlinear Regression Using Estimator


2020. 11. 28.

# !pip install tensorflow==2.2.0

Import the required modules

import numpy as np
import tensorflow as tf
from tensorflow import keras as ks
from tensorflow.keras.layers import Dense
from sklearn.model_selection import train_test_split

Model configurations

base_config = {
    'model_path': 'model_path',
    'model_version': '0', 
    'dataset_size': 100000,
    'test_ratio': 0.33,
    'activation': 'relu',
    'hiddens': [1],
    'epochs': 25,
    'batch_size': 1000,
    'loss_function': 'mean_squared_error',
    'optimizer': 'adam',
}

from copy import copy

config_hidden_1 = copy(base_config)
config_hidden_1['model_version'] = '1'
config_hidden_1['hiddens'] = [11, 1]

config_hidden_2 = copy(base_config)
config_hidden_2['model_version'] = '2'
config_hidden_2['hiddens'] = [33, 11, 1]

config_hidden_3 = copy(base_config)
config_hidden_3['model_version'] = '3'
config_hidden_3['hiddens'] = [99, 33, 11, 1]

config_hidden_4 = copy(base_config)
config_hidden_4['model_version'] = '4'
config_hidden_4['hiddens'] = [297, 99, 33, 11, 1]

Making the values used to build the model configurable from the outside makes reuse and testing convenient. The most basic settings are declared in base_config, and config_hidden_1, ..., config_hidden_4 are declared by copying and modifying it: the depth and shape of the hidden layers change, as does the model_version value used in the name of the directory where each model is saved.
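Note that copy here is a shallow copy: nested mutable values such as the hiddens list are shared with base_config until they are reassigned, which is exactly what the code above does. A minimal illustration (the variable names below are only for this example):

from copy import copy, deepcopy

base = {'model_version': '0', 'hiddens': [1]}

shallow = copy(base)
shallow['hiddens'] = [11, 1]      # reassignment: base['hiddens'] stays [1] (the pattern used above)
# shallow['hiddens'].append(11)   # in-place mutation would also change base['hiddens']

deep = deepcopy(base)             # deepcopy duplicates nested objects as well
deep['hiddens'].append(11)        # base['hiddens'] is still [1]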

Define the cubic function

def cubic_generator(a, b, c, d, e):
    def cubic(x):
        '''
        Input: a 3-dimensional vector
        Output: the value of the cubic function evaluated at the input vector
        '''
        x1, x2, x3 = x[0], x[1], x[2]
        return a + b*x1 + c*x2**2 + d*x3**3 + e
    return cubic

cubic_function = cubic_generator(1, 3, 5, 10, 20)

This declares the cubic function of a 3-dimensional vector that will be used for this regression.
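A quick sanity check of the generated function with a couple of hand-picked inputs (illustrative only):

print(cubic_function([1.0, 1.0, 1.0]))   # 1 + 3*1 + 5*1**2 + 10*1**3 + 20 = 39.0
print(cubic_function([0.0, 0.0, 0.0]))   # 1 + 20 = 21.0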

Create the train & test datasets with the cubic function

xs = np.random.uniform(0, 1, (base_config['dataset_size'], 3))
ys = [cubic_function(x) for x in xs]
x_train, x_test, y_train, y_test = train_test_split(xs, ys, test_size=base_config['test_ratio'])

Generate the dataset with the cubic function, then split it into training and test sets.
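With dataset_size of 100,000 and a test ratio of 0.33, the split leaves 67,000 training samples and 33,000 test samples; a quick check (for illustration):

print(x_train.shape, x_test.shape)   # (67000, 3) (33000, 3)
print(len(y_train), len(y_test))     # 67000 33000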

Define a data feeder

def input_feed_generator(x, y, epochs=base_config['epochs'], shuffle=True, batch_size=base_config['batch_size']):
    def input_feed():
        dataset = tf.data.Dataset.from_tensor_slices((x, y))
        if shuffle:
            dataset = dataset.shuffle(2000)
        dataset = dataset.batch(batch_size).repeat(epochs)
        return dataset
    return input_feed

During the training and evaluation phases, the high-level TensorFlow API obtains the tf.data.Dataset object to feed forward through the model by calling the input_fn it was given. This looks like a well-separated design: the training and evaluation modules never get involved in how the data is sampled.

  • The tf.data.Dataset.from_tensor_slices((x, y)) call bundles the inputs and targets as a tuple into a tf.data.Dataset object.

  • dataset.shuffle(n) configures the dataset to be shuffled, and

  • dataset.batch(batch_size).repeat(epochs) configures it to emit batches of the given size until the specified number of epochs has been covered.

Each time the returned input_feed is called it builds such a dataset, which behaves like a Python generator: it yields randomly shuffled batches of the specified size and stops producing data once the configured number of epochs has been exhausted.
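The returned input_feed can also be called directly in eager mode to inspect exactly what the Estimator will consume; a minimal sketch, for inspection only and not part of the pipeline:

dataset = input_feed_generator(x_train, y_train)()   # call the closure to build the tf.data.Dataset
for batch_x, batch_y in dataset.take(1):
    print(batch_x.shape)   # (1000, 3) -> one batch of 1000 rows with 3 features each
    print(batch_y.shape)   # (1000,)   -> the matching targets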

Define the training & testing phase

def train(x_train, y_train, config):
    train_dataset_feed = input_feed_generator(x_train, y_train,
                                              epochs=config['epochs'],
                                              batch_size=config['batch_size'])
    # Total number of mini-batches over all epochs.
    steps = config['epochs'] * len(x_train) // config['batch_size']

    model = ks.models.Sequential()
    activation = config['activation']
    hiddens = config['hiddens']

    # Build the stack of Dense layers described by config['hiddens'].
    for i, hidden in enumerate(hiddens):
        if 1 == len(hiddens) or i == 0:
            # First (or only) layer: declare the 3-feature input shape.
            model.add(Dense(hidden, activation=activation, input_shape=(3,)))
        elif i == len(hiddens) - 1:
            # Last layer: no activation, i.e. a linear regression output.
            model.add(Dense(hidden))
        else:
            model.add(Dense(hidden, activation=activation))

    loss_function = config['loss_function']
    optimizer = config['optimizer']
    model.compile(loss=loss_function, optimizer=optimizer)

    print(model.summary())

    # Wrap the compiled Keras model as an Estimator; checkpoints are written
    # to config['model_path'] + config['model_version'] (e.g. 'model_path0').
    estimator = ks.estimator.model_to_estimator(keras_model=model, 
                                                model_dir=config['model_path'] + config['model_version'])

    estimator.train(input_fn=train_dataset_feed, steps=steps)

    return estimator

def test(estimator, x_test, y_test, config):
    # Single pass over the test set, one sample per step, in order.
    test_dataset_feed = input_feed_generator(x_test, y_test, 
                                             epochs=1, batch_size=1, shuffle=False)
    steps = len(x_test)

    evaluation_result = estimator.evaluate(input_fn=test_dataset_feed, steps=steps)
    print('Final evaluation result: {}'.format(evaluation_result))

The training and evaluation phases are coded here.
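The steps value passed to estimator.train() is simply the total number of mini-batches over all epochs. With the settings used here (67,000 training samples, batch size 1,000, 25 epochs) it works out to 1,675 steps per train() call, which matches the 1,675-step jumps between checkpoints in the logs below:

steps = 25 * 67000 // 1000   # epochs * len(x_train) // batch_size = 1675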

Do train and evaluate!

estimator = train(x_train, y_train, base_config)
test(estimator, x_test, y_test, base_config)

Train and evaluate with the base_config settings.

Final evaluation result: {'loss': 1.4226116, 'global_step': 68675}

Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense (Dense)                (None, 1)                 4         
=================================================================
Total params: 4
Trainable params: 4
Non-trainable params: 0
_________________________________________________________________
None
INFO:tensorflow:Using default config.
INFO:tensorflow:Using the Keras model provided.
INFO:tensorflow:Using config: {'_model_dir': 'model_path0', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': None, '_save_checkpoints_secs': 600, '_session_config': allow_soft_placement: true
graph_options {
  rewrite_options {
    meta_optimizer_iterations: ONE
  }
}
, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': None, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_experimental_max_worker_delay_secs': None, '_session_creation_timeout_secs': 7200, '_service': None, '_cluster_spec': ClusterSpec({}), '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1}
WARNING:tensorflow:From /usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/resource_variable_ops.py:1659: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.
Instructions for updating:
If using Keras pass *_constraint arguments to layers.
WARNING:tensorflow:From /usr/local/lib/python3.8/dist-packages/tensorflow/python/training/training_util.py:235: Variable.initialized_value (from tensorflow.python.ops.variables) is deprecated and will be removed in a future version.
Instructions for updating:
Use Variable.read_value. Variables in 2.X are initialized automatically both in eager and graph (inside tf.defun) contexts.
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Restoring parameters from model_path0/model.ckpt-67000
WARNING:tensorflow:From /usr/local/lib/python3.8/dist-packages/tensorflow/python/training/saver.py:1077: get_checkpoint_mtimes (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.
Instructions for updating:
Use standard file utilities to get mtimes.
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 67000...
INFO:tensorflow:Saving checkpoints for 67000 into model_path0/model.ckpt.
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 67000...
INFO:tensorflow:loss = 1.4043052, step = 67000
INFO:tensorflow:global_step/sec: 508.11
INFO:tensorflow:loss = 1.4552209, step = 67100 (0.198 sec)
INFO:tensorflow:global_step/sec: 533.608
INFO:tensorflow:loss = 1.3765242, step = 67200 (0.187 sec)
INFO:tensorflow:global_step/sec: 561.591
INFO:tensorflow:loss = 1.4615049, step = 67300 (0.179 sec)
INFO:tensorflow:global_step/sec: 460.96
INFO:tensorflow:loss = 1.4333134, step = 67400 (0.217 sec)
INFO:tensorflow:global_step/sec: 400.201
INFO:tensorflow:loss = 1.4247539, step = 67500 (0.250 sec)
INFO:tensorflow:global_step/sec: 372.794
INFO:tensorflow:loss = 1.4732596, step = 67600 (0.268 sec)
INFO:tensorflow:global_step/sec: 376.162
INFO:tensorflow:loss = 1.459026, step = 67700 (0.266 sec)
INFO:tensorflow:global_step/sec: 372.418
INFO:tensorflow:loss = 1.4194301, step = 67800 (0.268 sec)
INFO:tensorflow:global_step/sec: 419.744
INFO:tensorflow:loss = 1.3952932, step = 67900 (0.238 sec)
INFO:tensorflow:global_step/sec: 484.038
INFO:tensorflow:loss = 1.4418167, step = 68000 (0.206 sec)
INFO:tensorflow:global_step/sec: 539.521
INFO:tensorflow:loss = 1.384494, step = 68100 (0.185 sec)
INFO:tensorflow:global_step/sec: 599.757
INFO:tensorflow:loss = 1.416843, step = 68200 (0.167 sec)
INFO:tensorflow:global_step/sec: 562.237
INFO:tensorflow:loss = 1.423495, step = 68300 (0.178 sec)
INFO:tensorflow:global_step/sec: 463.68
INFO:tensorflow:loss = 1.3321165, step = 68400 (0.215 sec)
INFO:tensorflow:global_step/sec: 573.673
INFO:tensorflow:loss = 1.4411297, step = 68500 (0.174 sec)
INFO:tensorflow:global_step/sec: 490.223
INFO:tensorflow:loss = 1.401835, step = 68600 (0.204 sec)
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 68675...
INFO:tensorflow:Saving checkpoints for 68675 into model_path0/model.ckpt.
WARNING:tensorflow:From /usr/local/lib/python3.8/dist-packages/tensorflow/python/training/saver.py:969: remove_checkpoint (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.
Instructions for updating:
Use standard file APIs to delete files with this prefix.
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 68675...
INFO:tensorflow:Loss for final step: 1.3499732.
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Starting evaluation at 2020-11-28T00:38:02Z
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Restoring parameters from model_path0/model.ckpt-68675
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Evaluation [3300/33000]
INFO:tensorflow:Evaluation [6600/33000]
INFO:tensorflow:Evaluation [9900/33000]
INFO:tensorflow:Evaluation [13200/33000]
INFO:tensorflow:Evaluation [16500/33000]
INFO:tensorflow:Evaluation [19800/33000]
INFO:tensorflow:Evaluation [23100/33000]
INFO:tensorflow:Evaluation [26400/33000]
INFO:tensorflow:Evaluation [29700/33000]
INFO:tensorflow:Evaluation [33000/33000]
INFO:tensorflow:Inference Time : 9.25867s
INFO:tensorflow:Finished evaluation at 2020-11-28-00:38:11
INFO:tensorflow:Saving dict for global step 68675: global_step = 68675, loss = 1.4226116
INFO:tensorflow:Saving 'checkpoint_path' summary for global step 68675: model_path0/model.ckpt-68675
Final evaluation result: {'loss': 1.4226116, 'global_step': 68675}

estimator = train(x_train, y_train, config_hidden_1)
test(estimator, x_test, y_test, config_hidden_1)

Train and evaluate with the config_hidden_1 settings.

Final evaluation result: {'loss': 1.4228917, 'global_step': 26800}

Model: "sequential_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense_1 (Dense)              (None, 11)                44        
_________________________________________________________________
dense_2 (Dense)              (None, 1)                 12        
=================================================================
Total params: 56
Trainable params: 56
Non-trainable params: 0
_________________________________________________________________
None
INFO:tensorflow:Using default config.
INFO:tensorflow:Using the Keras model provided.
INFO:tensorflow:Using config: {'_model_dir': 'model_path1', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': None, '_save_checkpoints_secs': 600, '_session_config': allow_soft_placement: true
graph_options {
  rewrite_options {
    meta_optimizer_iterations: ONE
  }
}
, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': None, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_experimental_max_worker_delay_secs': None, '_session_creation_timeout_secs': 7200, '_service': None, '_cluster_spec': ClusterSpec({}), '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1}
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Restoring parameters from model_path1/model.ckpt-25125
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 25125...
INFO:tensorflow:Saving checkpoints for 25125 into model_path1/model.ckpt.
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 25125...
INFO:tensorflow:loss = 1.3151879, step = 25125
INFO:tensorflow:global_step/sec: 435.255
INFO:tensorflow:loss = 1.4675959, step = 25225 (0.231 sec)
INFO:tensorflow:global_step/sec: 413.03
INFO:tensorflow:loss = 1.4413639, step = 25325 (0.242 sec)
INFO:tensorflow:global_step/sec: 481.332
INFO:tensorflow:loss = 1.5176368, step = 25425 (0.207 sec)
INFO:tensorflow:global_step/sec: 549.459
INFO:tensorflow:loss = 1.491032, step = 25525 (0.182 sec)
INFO:tensorflow:global_step/sec: 484.943
INFO:tensorflow:loss = 1.4394522, step = 25625 (0.206 sec)
INFO:tensorflow:global_step/sec: 515.32
INFO:tensorflow:loss = 1.3918484, step = 25725 (0.194 sec)
INFO:tensorflow:global_step/sec: 412.318
INFO:tensorflow:loss = 1.4667379, step = 25825 (0.243 sec)
INFO:tensorflow:global_step/sec: 411.204
INFO:tensorflow:loss = 1.4815986, step = 25925 (0.245 sec)
INFO:tensorflow:global_step/sec: 526.099
INFO:tensorflow:loss = 1.3581336, step = 26025 (0.189 sec)
INFO:tensorflow:global_step/sec: 490.472
INFO:tensorflow:loss = 1.3903273, step = 26125 (0.204 sec)
INFO:tensorflow:global_step/sec: 477.556
INFO:tensorflow:loss = 1.4734547, step = 26225 (0.210 sec)
INFO:tensorflow:global_step/sec: 424.673
INFO:tensorflow:loss = 1.4226547, step = 26325 (0.235 sec)
INFO:tensorflow:global_step/sec: 475.406
INFO:tensorflow:loss = 1.3898586, step = 26425 (0.211 sec)
INFO:tensorflow:global_step/sec: 499.118
INFO:tensorflow:loss = 1.4323798, step = 26525 (0.200 sec)
INFO:tensorflow:global_step/sec: 443.348
INFO:tensorflow:loss = 1.4899298, step = 26625 (0.225 sec)
INFO:tensorflow:global_step/sec: 527.562
INFO:tensorflow:loss = 1.4770709, step = 26725 (0.190 sec)
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 26800...
INFO:tensorflow:Saving checkpoints for 26800 into model_path1/model.ckpt.
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 26800...
INFO:tensorflow:Loss for final step: 1.3632728.
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Starting evaluation at 2020-11-28T00:38:16Z
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Restoring parameters from model_path1/model.ckpt-26800
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Evaluation [3300/33000]
INFO:tensorflow:Evaluation [6600/33000]
INFO:tensorflow:Evaluation [9900/33000]
INFO:tensorflow:Evaluation [13200/33000]
INFO:tensorflow:Evaluation [16500/33000]
INFO:tensorflow:Evaluation [19800/33000]
INFO:tensorflow:Evaluation [23100/33000]
INFO:tensorflow:Evaluation [26400/33000]
INFO:tensorflow:Evaluation [29700/33000]
INFO:tensorflow:Evaluation [33000/33000]
INFO:tensorflow:Inference Time : 9.58498s
INFO:tensorflow:Finished evaluation at 2020-11-28-00:38:25
INFO:tensorflow:Saving dict for global step 26800: global_step = 26800, loss = 1.4228917
INFO:tensorflow:Saving 'checkpoint_path' summary for global step 26800: model_path1/model.ckpt-26800
Final evaluation result: {'loss': 1.4228917, 'global_step': 26800}

estimator = train(x_train, y_train, config_hidden_2)
test(estimator, x_test, y_test, config_hidden_2)

Train and evaluate with the config_hidden_2 settings.

Final evaluation result: {'loss': 0.017187782, 'global_step': 18425}

Model: "sequential_2"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense_3 (Dense)              (None, 33)                132       
_________________________________________________________________
dense_4 (Dense)              (None, 11)                374       
_________________________________________________________________
dense_5 (Dense)              (None, 1)                 12        
=================================================================
Total params: 518
Trainable params: 518
Non-trainable params: 0
_________________________________________________________________
None
INFO:tensorflow:Using default config.
INFO:tensorflow:Using the Keras model provided.
INFO:tensorflow:Using config: {'_model_dir': 'model_path2', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': None, '_save_checkpoints_secs': 600, '_session_config': allow_soft_placement: true
graph_options {
  rewrite_options {
    meta_optimizer_iterations: ONE
  }
}
, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': None, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_experimental_max_worker_delay_secs': None, '_session_creation_timeout_secs': 7200, '_service': None, '_cluster_spec': ClusterSpec({}), '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1}
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Restoring parameters from model_path2/model.ckpt-16750
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 16750...
INFO:tensorflow:Saving checkpoints for 16750 into model_path2/model.ckpt.
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 16750...
INFO:tensorflow:loss = 0.01883104, step = 16750
INFO:tensorflow:global_step/sec: 358.105
INFO:tensorflow:loss = 0.015604466, step = 16850 (0.280 sec)
INFO:tensorflow:global_step/sec: 524.241
INFO:tensorflow:loss = 0.016864803, step = 16950 (0.191 sec)
INFO:tensorflow:global_step/sec: 449.886
INFO:tensorflow:loss = 0.017450226, step = 17050 (0.222 sec)
INFO:tensorflow:global_step/sec: 423.148
INFO:tensorflow:loss = 0.017771622, step = 17150 (0.236 sec)
INFO:tensorflow:global_step/sec: 396.615
INFO:tensorflow:loss = 0.016049348, step = 17250 (0.253 sec)
INFO:tensorflow:global_step/sec: 395.535
INFO:tensorflow:loss = 0.018908497, step = 17350 (0.252 sec)
INFO:tensorflow:global_step/sec: 486.993
INFO:tensorflow:loss = 0.017589359, step = 17450 (0.205 sec)
INFO:tensorflow:global_step/sec: 498.276
INFO:tensorflow:loss = 0.016991638, step = 17550 (0.201 sec)
INFO:tensorflow:global_step/sec: 464.048
INFO:tensorflow:loss = 0.017221909, step = 17650 (0.217 sec)
INFO:tensorflow:global_step/sec: 462.997
INFO:tensorflow:loss = 0.017434902, step = 17750 (0.215 sec)
INFO:tensorflow:global_step/sec: 473.943
INFO:tensorflow:loss = 0.01652798, step = 17850 (0.211 sec)
INFO:tensorflow:global_step/sec: 514.213
INFO:tensorflow:loss = 0.017366089, step = 17950 (0.195 sec)
INFO:tensorflow:global_step/sec: 474.795
INFO:tensorflow:loss = 0.017820425, step = 18050 (0.211 sec)
INFO:tensorflow:global_step/sec: 404.255
INFO:tensorflow:loss = 0.017791638, step = 18150 (0.247 sec)
INFO:tensorflow:global_step/sec: 418.118
INFO:tensorflow:loss = 0.015657041, step = 18250 (0.239 sec)
INFO:tensorflow:global_step/sec: 501.857
INFO:tensorflow:loss = 0.016358878, step = 18350 (0.199 sec)
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 18425...
INFO:tensorflow:Saving checkpoints for 18425 into model_path2/model.ckpt.
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 18425...
INFO:tensorflow:Loss for final step: 0.01584984.
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Starting evaluation at 2020-11-28T00:38:30Z
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Restoring parameters from model_path2/model.ckpt-18425
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Evaluation [3300/33000]
INFO:tensorflow:Evaluation [6600/33000]
INFO:tensorflow:Evaluation [9900/33000]
INFO:tensorflow:Evaluation [13200/33000]
INFO:tensorflow:Evaluation [16500/33000]
INFO:tensorflow:Evaluation [19800/33000]
INFO:tensorflow:Evaluation [23100/33000]
INFO:tensorflow:Evaluation [26400/33000]
INFO:tensorflow:Evaluation [29700/33000]
INFO:tensorflow:Evaluation [33000/33000]
INFO:tensorflow:Inference Time : 10.06911s
INFO:tensorflow:Finished evaluation at 2020-11-28-00:38:40
INFO:tensorflow:Saving dict for global step 18425: global_step = 18425, loss = 0.017187782
INFO:tensorflow:Saving 'checkpoint_path' summary for global step 18425: model_path2/model.ckpt-18425
Final evaluation result: {'loss': 0.017187782, 'global_step': 18425}

estimator = train(x_train, y_train, config_hidden_3)
test(estimator, x_test, y_test, config_hidden_3)

Train and evaluate with the config_hidden_3 settings.

Final evaluation result: {'loss': 0.00090525486, 'global_step': 16750}

Model: "sequential_3"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense_6 (Dense)              (None, 99)                396       
_________________________________________________________________
dense_7 (Dense)              (None, 33)                3300      
_________________________________________________________________
dense_8 (Dense)              (None, 11)                374       
_________________________________________________________________
dense_9 (Dense)              (None, 1)                 12        
=================================================================
Total params: 4,082
Trainable params: 4,082
Non-trainable params: 0
_________________________________________________________________
None
INFO:tensorflow:Using default config.
INFO:tensorflow:Using the Keras model provided.
INFO:tensorflow:Using config: {'_model_dir': 'model_path3', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': None, '_save_checkpoints_secs': 600, '_session_config': allow_soft_placement: true
graph_options {
  rewrite_options {
    meta_optimizer_iterations: ONE
  }
}
, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': None, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_experimental_max_worker_delay_secs': None, '_session_creation_timeout_secs': 7200, '_service': None, '_cluster_spec': ClusterSpec({}), '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1}
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Restoring parameters from model_path3/model.ckpt-15075
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 15075...
INFO:tensorflow:Saving checkpoints for 15075 into model_path3/model.ckpt.
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 15075...
INFO:tensorflow:loss = 0.00097537064, step = 15075
INFO:tensorflow:global_step/sec: 331.988
INFO:tensorflow:loss = 0.0009289202, step = 15175 (0.302 sec)
INFO:tensorflow:global_step/sec: 416.984
INFO:tensorflow:loss = 0.0015710185, step = 15275 (0.240 sec)
INFO:tensorflow:global_step/sec: 415.075
INFO:tensorflow:loss = 0.002506471, step = 15375 (0.241 sec)
INFO:tensorflow:global_step/sec: 419.358
INFO:tensorflow:loss = 0.0020511155, step = 15475 (0.238 sec)
INFO:tensorflow:global_step/sec: 414.322
INFO:tensorflow:loss = 0.0012402382, step = 15575 (0.241 sec)
INFO:tensorflow:global_step/sec: 423.701
INFO:tensorflow:loss = 0.0012041066, step = 15675 (0.236 sec)
INFO:tensorflow:global_step/sec: 409.421
INFO:tensorflow:loss = 0.0009055139, step = 15775 (0.244 sec)
INFO:tensorflow:global_step/sec: 410.443
INFO:tensorflow:loss = 0.0012994356, step = 15875 (0.243 sec)
INFO:tensorflow:global_step/sec: 409.508
INFO:tensorflow:loss = 0.0018289988, step = 15975 (0.244 sec)
INFO:tensorflow:global_step/sec: 418.377
INFO:tensorflow:loss = 0.0010997656, step = 16075 (0.239 sec)
INFO:tensorflow:global_step/sec: 419.463
INFO:tensorflow:loss = 0.00094131374, step = 16175 (0.238 sec)
INFO:tensorflow:global_step/sec: 406.04
INFO:tensorflow:loss = 0.0011029835, step = 16275 (0.246 sec)
INFO:tensorflow:global_step/sec: 381.449
INFO:tensorflow:loss = 0.00094743, step = 16375 (0.262 sec)
INFO:tensorflow:global_step/sec: 325.478
INFO:tensorflow:loss = 0.0016332661, step = 16475 (0.307 sec)
INFO:tensorflow:global_step/sec: 338.961
INFO:tensorflow:loss = 0.0009722224, step = 16575 (0.295 sec)
INFO:tensorflow:global_step/sec: 327.142
INFO:tensorflow:loss = 0.001341069, step = 16675 (0.306 sec)
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 16750...
INFO:tensorflow:Saving checkpoints for 16750 into model_path3/model.ckpt.
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 16750...
INFO:tensorflow:Loss for final step: 0.00093256787.
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Starting evaluation at 2020-11-28T00:38:46Z
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Restoring parameters from model_path3/model.ckpt-16750
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Evaluation [3300/33000]
INFO:tensorflow:Evaluation [6600/33000]
INFO:tensorflow:Evaluation [9900/33000]
INFO:tensorflow:Evaluation [13200/33000]
INFO:tensorflow:Evaluation [16500/33000]
INFO:tensorflow:Evaluation [19800/33000]
INFO:tensorflow:Evaluation [23100/33000]
INFO:tensorflow:Evaluation [26400/33000]
INFO:tensorflow:Evaluation [29700/33000]
INFO:tensorflow:Evaluation [33000/33000]
INFO:tensorflow:Inference Time : 10.08492s
INFO:tensorflow:Finished evaluation at 2020-11-28-00:38:56
INFO:tensorflow:Saving dict for global step 16750: global_step = 16750, loss = 0.00090525486
INFO:tensorflow:Saving 'checkpoint_path' summary for global step 16750: model_path3/model.ckpt-16750
Final evaluation result: {'loss': 0.00090525486, 'global_step': 16750}

estimator = train(x_train, y_train, config_hidden_4)
test(estimator, x_test, y_test, config_hidden_4)

Train and evaluate with the config_hidden_4 settings.

Final evaluation result: {'loss': 0.0003716136, 'global_step': 16750}

Model: "sequential_4"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense_10 (Dense)             (None, 297)               1188      
_________________________________________________________________
dense_11 (Dense)             (None, 99)                29502     
_________________________________________________________________
dense_12 (Dense)             (None, 33)                3300      
_________________________________________________________________
dense_13 (Dense)             (None, 11)                374       
_________________________________________________________________
dense_14 (Dense)             (None, 1)                 12        
=================================================================
Total params: 34,376
Trainable params: 34,376
Non-trainable params: 0
_________________________________________________________________
None
INFO:tensorflow:Using default config.
INFO:tensorflow:Using the Keras model provided.
INFO:tensorflow:Using config: {'_model_dir': 'model_path4', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': None, '_save_checkpoints_secs': 600, '_session_config': allow_soft_placement: true
graph_options {
  rewrite_options {
    meta_optimizer_iterations: ONE
  }
}
, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': None, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_experimental_max_worker_delay_secs': None, '_session_creation_timeout_secs': 7200, '_service': None, '_cluster_spec': ClusterSpec({}), '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1}
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Restoring parameters from model_path4/model.ckpt-15075
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 15075...
INFO:tensorflow:Saving checkpoints for 15075 into model_path4/model.ckpt.
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 15075...
INFO:tensorflow:loss = 0.000880421, step = 15075
INFO:tensorflow:global_step/sec: 193.322
INFO:tensorflow:loss = 0.00041869216, step = 15175 (0.518 sec)
INFO:tensorflow:global_step/sec: 228.483
INFO:tensorflow:loss = 0.00087714195, step = 15275 (0.438 sec)
INFO:tensorflow:global_step/sec: 225.498
INFO:tensorflow:loss = 0.0036145335, step = 15375 (0.443 sec)
INFO:tensorflow:global_step/sec: 250.87
INFO:tensorflow:loss = 0.004145555, step = 15475 (0.399 sec)
INFO:tensorflow:global_step/sec: 250.533
INFO:tensorflow:loss = 0.003131874, step = 15575 (0.399 sec)
INFO:tensorflow:global_step/sec: 236.734
INFO:tensorflow:loss = 0.00051716634, step = 15675 (0.422 sec)
INFO:tensorflow:global_step/sec: 243.268
INFO:tensorflow:loss = 0.0017692904, step = 15775 (0.411 sec)
INFO:tensorflow:global_step/sec: 238.103
INFO:tensorflow:loss = 0.0063250856, step = 15875 (0.420 sec)
INFO:tensorflow:global_step/sec: 234.148
INFO:tensorflow:loss = 0.0022252058, step = 15975 (0.427 sec)
INFO:tensorflow:global_step/sec: 233.761
INFO:tensorflow:loss = 0.0003358841, step = 16075 (0.428 sec)
INFO:tensorflow:global_step/sec: 243.844
INFO:tensorflow:loss = 0.0014498284, step = 16175 (0.410 sec)
INFO:tensorflow:global_step/sec: 236.112
INFO:tensorflow:loss = 0.019312123, step = 16275 (0.423 sec)
INFO:tensorflow:global_step/sec: 235.374
INFO:tensorflow:loss = 0.0015089135, step = 16375 (0.425 sec)
INFO:tensorflow:global_step/sec: 240.79
INFO:tensorflow:loss = 0.0065686377, step = 16475 (0.415 sec)
INFO:tensorflow:global_step/sec: 242.552
INFO:tensorflow:loss = 0.0005172259, step = 16575 (0.412 sec)
INFO:tensorflow:global_step/sec: 234.081
INFO:tensorflow:loss = 0.0006902584, step = 16675 (0.427 sec)
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 16750...
INFO:tensorflow:Saving checkpoints for 16750 into model_path4/model.ckpt.
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 16750...
INFO:tensorflow:Loss for final step: 0.00041096073.
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Starting evaluation at 2020-11-28T00:39:05Z
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Restoring parameters from model_path4/model.ckpt-16750
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Evaluation [3300/33000]
INFO:tensorflow:Evaluation [6600/33000]
INFO:tensorflow:Evaluation [9900/33000]
INFO:tensorflow:Evaluation [13200/33000]
INFO:tensorflow:Evaluation [16500/33000]
INFO:tensorflow:Evaluation [19800/33000]
INFO:tensorflow:Evaluation [23100/33000]
INFO:tensorflow:Evaluation [26400/33000]
INFO:tensorflow:Evaluation [29700/33000]
INFO:tensorflow:Evaluation [33000/33000]
INFO:tensorflow:Inference Time : 10.45715s
INFO:tensorflow:Finished evaluation at 2020-11-28-00:39:15
INFO:tensorflow:Saving dict for global step 16750: global_step = 16750, loss = 0.0003716136
INFO:tensorflow:Saving 'checkpoint_path' summary for global step 16750: model_path4/model.ckpt-16750
Final evaluation result: {'loss': 0.0003716136, 'global_step': 16750}