Meta (formerly Facebook) announced PyTorch Live, a set of tools designed to make building AI-powered experiences for mobile devices easier.
PyTorch, which Meta publicly released in January 2017, is an open-source machine learning library based on Torch, a scientific computing framework and script language that is in turn based on the Lua programming language. While TensorFlow has been around slightly longer (since November 2015), PyTorch continues to see a rapid uptake in the data science and developer community.
It claimed one of the top spots for fast-growing open-source projects last year, and Meta recently revealed that in 2019 the number of contributors on the platform grew more than 50 per cent year-over-year to nearly 1,200.
PyTorch Live builds on PyTorch Mobile, a runtime that allows developers to go from training a model to deploying it while staying within the PyTorch ecosystem, and on React Native, a library for creating visual user interfaces. PyTorch Mobile powers the on-device inference for PyTorch Live.
PyTorch Mobile's runtime was built on the assumption that anything a developer wants to do on a mobile or edge device, the developer might also want to do on a server.
PyTorch Live ships with a command-line interface (CLI) and a data processing API. The CLI enables developers to set up a mobile development environment and bootstrap mobile app projects. The data processing API prepares and integrates custom models with the PyTorch Live API, which can then be built into AI-powered mobile apps for Android and iOS.
In the future, Meta plans to enable the community to discover and share PyTorch models and demos through PyTorch Live, provide a more customisable data processing API, and add support for machine learning domains that work with audio and video data.