Training with spot or preemptible instances is significantly cheaper, but there is a small risk that your job could be
preempted. With Keras, use
eml.callbacks.preempted_callback('path/to/checkpoint') to automatically save a checkpoint
before your job shuts down if preemption occurs. If you are using the prefer option, you can use preempted_callback to save your progress and resume from where you left off when your job is restarted.
import os
import engineml.keras as eml

callbacks = eml.callbacks.preempted_callback(
    os.path.join(eml.data.output_dir(), 'preempted.hdf5'))

# Train model
model.fit(..., callbacks=callbacks)
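The resume half of the pattern is not shown above. A minimal sketch of the restart logic, using only the standard library: the checkpoint path check stands in for your real setup, and the file creation simulates the checkpoint that preempted_callback would have written before shutdown.

```python
import os
import tempfile

def restore_or_none(checkpoint_path):
    """Return the checkpoint path if a previous run saved one, else None.

    On restart after a preemption, the checkpoint written by the
    preemption callback will already exist at this path, so training
    can resume from it instead of starting over.
    """
    if os.path.exists(checkpoint_path):
        return checkpoint_path
    return None

# Stand-in for os.path.join(eml.data.output_dir(), 'preempted.hdf5')
path = os.path.join(tempfile.gettempdir(), 'preempted_demo.hdf5')
if os.path.exists(path):
    os.remove(path)

print(restore_or_none(path))  # None on a fresh run
open(path, 'w').close()       # simulate the callback saving a checkpoint
print(restore_or_none(path))  # the checkpoint path after a restart
```

In a real job you would pass the returned path to something like keras.models.load_model (an assumption about your training code) when it is not None, and build the model from scratch otherwise.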