Amazon Releases Incremental Training For SageMaker JumpStart

Developers can incrementally retrain and fine-tune pre-trained models without requiring extra code

AWS recently released a new feature in SageMaker (the AWS machine-learning service) JumpStart to incrementally retrain machine-learning (ML) models with expanded datasets. With this feature, developers can fine-tune their models for better performance in production with a couple of clicks.

With the new JumpStart feature, developers can incrementally retrain and fine-tune pre-trained models without writing extra code. This capability, known in ML as transfer learning, adapts a general model to a business-specific problem using a new dataset; it increases the accuracy of the fine-tuned model and reduces training cost compared with training from scratch. JumpStart also includes popular ML algorithms based on LightGBM, CatBoost, XGBoost, and Scikit-learn that developers can train from scratch for tabular regression and classification.
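To illustrate the idea, below is a minimal sketch of how an incremental training job can be launched with the SageMaker Python SDK, following the artifact-retrieval pattern used in AWS's sample notebooks. The model ID, S3 paths, and entry-point name here are illustrative placeholders, not the exact values from the announcement; the key point is that model_uri points at the output of a previous fine-tuning job rather than the original pre-trained checkpoint.

```python
# A minimal sketch of JumpStart incremental training with the SageMaker Python
# SDK; the model_id, S3 paths, and entry point are illustrative placeholders.
import sagemaker
from sagemaker import image_uris, script_uris
from sagemaker.estimator import Estimator

model_id, model_version = "tensorflow-ic-imagenet-mobilenet-v2-100-224-classification-4", "*"
instance_type = "ml.p3.2xlarge"
role = sagemaker.get_execution_role()

# Retrieve the training container image and training script for this model.
train_image_uri = image_uris.retrieve(
    region=None, framework=None, image_scope="training",
    model_id=model_id, model_version=model_version, instance_type=instance_type,
)
train_script_uri = script_uris.retrieve(
    model_id=model_id, model_version=model_version, script_scope="training"
)

# For incremental training, start from the artifacts of a previous fine-tuning
# job instead of the original pre-trained checkpoint (placeholder S3 path).
previous_model_uri = "s3://my-bucket/previous-job/output/model.tar.gz"

estimator = Estimator(
    role=role,
    image_uri=train_image_uri,
    source_dir=train_script_uri,
    entry_point="transfer_learning.py",  # entry point used in the sample notebooks
    model_uri=previous_model_uri,
    instance_count=1,
    instance_type=instance_type,
    output_path="s3://my-bucket/incremental-output",  # placeholder
)

# Launch the incremental training job on the expanded dataset.
estimator.fit({"training": "s3://my-bucket/expanded-dataset/"})
```

Because the container image, training script, and checkpoint are all resolved by the SDK helpers, the same few lines apply across the supported vision and text models.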

This recent feature is part of a series of efforts to add more automation to SageMaker JumpStart. JumpStart is a one-click feature that deploys end-to-end models for common business problems, requiring only a few parameters and configuration settings. It offers a collection of 300 models for tasks such as object detection, text classification, and text generation, drawn from popular open-source hubs like TensorFlow, PyTorch, Hugging Face, and MXNet. These features are available through Amazon SageMaker Studio, a user-friendly GUI for launching ML products, and through the Amazon SageMaker SDK, which allows easier embedding in production ML pipelines.
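Deploying one of these pre-trained models through the SDK follows a similar retrieval pattern. The sketch below makes the same assumptions as the training example above: the model ID, role handling, instance type, and entry point are placeholders for illustration.

```python
# A minimal sketch of deploying a JumpStart model with the SageMaker Python
# SDK; model_id, instance type, and entry point are illustrative placeholders.
import sagemaker
from sagemaker import image_uris, model_uris, script_uris
from sagemaker.model import Model

model_id, model_version = "tensorflow-ic-imagenet-mobilenet-v2-100-224-classification-4", "*"
role = sagemaker.get_execution_role()

# Retrieve the inference container image, inference script, and model artifacts.
deploy_image_uri = image_uris.retrieve(
    region=None, framework=None, image_scope="inference",
    model_id=model_id, model_version=model_version, instance_type="ml.m5.xlarge",
)
deploy_script_uri = script_uris.retrieve(
    model_id=model_id, model_version=model_version, script_scope="inference"
)
deploy_model_uri = model_uris.retrieve(
    model_id=model_id, model_version=model_version, model_scope="inference"
)

model = Model(
    image_uri=deploy_image_uri,
    source_dir=deploy_script_uri,
    model_data=deploy_model_uri,
    entry_point="inference.py",  # entry point used in the sample notebooks
    role=role,
)

# Create a real-time endpoint hosting the pre-trained (or fine-tuned) model.
predictor = model.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")
```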

As part of this announcement, Amazon published sample code in Jupyter notebooks for developers. These notebooks contain code examples showing how to use SageMaker JumpStart incremental training across different applications and domains.

Other cloud providers, such as Azure ML and the Google ML APIs, offer capabilities for using pre-trained models in production, but incremental training there requires more coding by developers. Open-source ML frameworks such as TensorFlow, Keras, PyTorch, and MXNet include incremental-training capabilities in their APIs for some ML applications, like object detection and text analysis. Developers can use these APIs to incrementally fine-tune models on their own datasets.
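For comparison, here is a minimal transfer-learning sketch with Keras: a pre-trained backbone is frozen, a new head is trained on a business-specific dataset, and the saved model can later be reloaded and fitted again on an expanded dataset. The dataset directory, class count, and file names are illustrative placeholders.

```python
# A minimal transfer-learning sketch in Keras; the dataset directory and
# the five-class head are illustrative placeholders.
import tensorflow as tf

# Load a MobileNetV2 backbone pre-trained on ImageNet, without its classifier head.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet"
)
base.trainable = False  # freeze the pre-trained weights

# Attach a new classification head for the business-specific classes.
inputs = tf.keras.Input(shape=(224, 224, 3))
x = tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1)(inputs)  # MobileNetV2 expects [-1, 1]
x = base(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(5, activation="softmax")(x)  # e.g. 5 custom classes
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Fine-tune the new head on the business-specific dataset (placeholder path).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/new_images", image_size=(224, 224), batch_size=32
)
model.fit(train_ds, epochs=3)

# Save the fine-tuned model; incremental training later means reloading it and
# calling fit() again on the expanded dataset instead of starting from scratch.
model.save("fine_tuned_model.h5")
```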