Update 02/Nov/2020: fixed issue with file name in Step 2.

Training a supervised machine learning model often requires a significant amount of resources. With ever-growing datasets, and models that continuously become deeper and sometimes wider, the computational cost of getting a well-performing model increases day after day. That's why it's sometimes not worthwhile to train your model on a machine that runs on-premise: the cost of buying and maintaining such a machine doesn't outweigh the benefits. In those cases, cloud platforms come to the rescue. Through various service offerings, many cloud vendors - think Amazon Web Services, Microsoft Azure and Google Cloud Platform - have pooled together resources that can be used, and paid for, as you use them. For example, they allow you to train your model on a few heavy machines, which you simply turn off once training has finished. In many cases, the limited cost of this approach, especially compared to the cost of owning and maintaining a heavy-resource machine, really makes training your models off-premises worthwhile.

Traditionally, however, training a model in the cloud hasn't been easy. This is where TensorFlow Cloud comes in. By simply connecting to the Google Cloud Platform, with a few lines of code, it allows you to train your Keras models in the cloud. This is great, because a training job can even be started from your own machine. What's more, TensorFlow Cloud supports parallelism, meaning that you can use multiple machines for training, all at once! While training a model in the cloud was not difficult before, doing so in a distributed way was.

In this article, we'll be exploring TensorFlow Cloud in more detail. Firstly, we'll be looking at the need for cloud-based training, by showing the need for training with heavy equipment as well as the cost of acquiring such a machine. Subsequently, we introduce the Google Cloud AI Platform, to which TensorFlow Cloud connects for training your models, and argue why cloud services can help you reduce cost without losing the benefits of such heavy machinery. After this introduction, we'll show how TensorFlow Cloud can be installed and linked to your Keras model, looking at the TensorFlow Cloud API and especially at the cloud strategies that can be employed to train your model in a distributed way. This altogether gives us the context we need for the real work, which is TensorFlow Cloud itself: finally, we demonstrate how a Keras model can actually be trained in the cloud.
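To make the "few lines of code" claim concrete, here is a minimal sketch of what submitting a Keras training script to Google Cloud AI Platform with TensorFlow Cloud typically looks like. It assumes you have installed the `tensorflow-cloud` package (`pip install tensorflow-cloud`), have a billing-enabled Google Cloud project, and have your credentials configured; the file names `train.py` and `requirements.txt` are illustrative placeholders for your own training script and its dependencies.

```python
# Sketch: submitting an existing Keras training script to the cloud
# with TensorFlow Cloud. This is a job-submission configuration, not
# something that runs locally - tfc.run() packages the entry point,
# ships it to Google Cloud AI Platform, and starts training there.
import tensorflow_cloud as tfc

tfc.run(
    entry_point="train.py",              # your existing Keras training script
    requirements_txt="requirements.txt", # extra pip dependencies for the job
    distribution_strategy="auto",        # let TensorFlow Cloud pick a
                                         # distribution strategy that matches
                                         # the machine configuration
    worker_count=0,                      # raise this to train on multiple
                                         # machines at once
)
```

The same script can be run unchanged on your own machine; only the `tfc.run()` call decides where training actually happens, which is why a cloud training job can be started straight from your laptop.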