Amazon SageMaker announces new features to the built-in Object2Vec algorithm


The Object2Vec algorithm now automatically samples pairs that are unlikely to be observed in the training data and labels them as negative. This eliminates the need to implement negative sampling manually as part of data pre-processing, a significant time saving.
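As a rough sketch, automatic negative sampling is exposed through the `negative_sampling_rate` hyperparameter documented for Object2Vec; the surrounding values below (sequence length, vocabulary size, sampling rate) are illustrative assumptions, not defaults from this announcement.

```python
# Hedged sketch: enabling automatic negative sampling for Object2Vec.
# All numeric values here are illustrative assumptions.
hyperparameters = {
    "enc0_max_seq_len": 50,      # example encoder setting, not from the announcement
    "enc0_vocab_size": 10000,    # example vocabulary size
    # Generate roughly 3 random negative pairs per observed positive pair,
    # so no manual negative sampling is needed during pre-processing.
    "negative_sampling_rate": 3,
}

assert hyperparameters["negative_sampling_rate"] > 0
```

These hyperparameters would typically be passed to a SageMaker `Estimator` via `set_hyperparameters(**hyperparameters)`.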

In addition, the Object2Vec algorithm now supports sparse gradient updates, which speed up single-GPU training by up to 2 times with no loss in performance. With multiple GPUs, training can be accelerated by up to 20 times.
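In the Object2Vec documentation, sparse gradient updates are controlled by the `token_embedding_storage_type` hyperparameter; the instance type below is an assumption used only to illustrate that the multi-GPU speedup comes from choosing a multi-GPU training instance.

```python
# Hedged sketch: opting into sparse gradient updates.
hyperparameters = {
    # "row_sparse" stores token embeddings sparsely so gradient updates
    # touch only the rows used in a batch, speeding up training.
    "token_embedding_storage_type": "row_sparse",
}

# Multi-GPU speedup is obtained by training on a multi-GPU instance type.
# The specific type here is an assumption; any multi-GPU instance applies.
instance_type = "ml.p3.8xlarge"

assert hyperparameters["token_embedding_storage_type"] == "row_sparse"
```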

The Object2Vec algorithm uses two encoders, one for each input source. You can now train both encoders jointly with shared weights, which speeds up the training process. The comparator operator is also customizable, giving you flexibility in how the two encoding vectors are assembled into a single vector for use cases such as document embedding.
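These two capabilities map to the documented `tied_token_embedding_weight` and `comparator_list` hyperparameters; the particular comparator choice below is an illustrative assumption.

```python
# Hedged sketch: shared-encoder training and a custom comparator.
hyperparameters = {
    # Tie the token embedding weights so both encoders share one
    # embedding layer and are trained jointly.
    "tied_token_embedding_weight": "true",
    # Choose which operators combine the two encoding vectors; this
    # subset (dropping "concat") is an example, not a recommendation.
    "comparator_list": "hadamard, abs_diff",
}

assert "hadamard" in hyperparameters["comparator_list"]
```

Restricting `comparator_list` changes the dimensionality of the combined vector fed to the final layers, which is why it matters for embedding-style use cases.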

Refer to the documentation for details, and learn more in the blog post.

