In collaboration with NYU, TII’s ARRC has launched RLtools, an innovative library that accelerates autonomous system training, reduces resource requirements, and enhances compatibility across simulations and real-world applications.
The Technology Innovation Institute (TII), a global scientific research centre and the applied research pillar of Abu Dhabi’s Advanced Technology Research Council (ATRC), announced that its Autonomous Robotics Research Center (ARRC) had released RLtools.
Developed in collaboration with New York University’s Agile Robotics and Perception Lab as part of a joint project, ‘Learning to Fly in Seconds’, this open-source reinforcement learning library addresses critical training challenges in autonomous systems.
The two organisations will continue working on a versatile, adaptable framework, incrementally expanding the library's suite of algorithms, improving accuracy, and extending compatibility across platforms, from simulation to real-world deployment.
The project has already achieved the first instance of training high-speed, end-to-end drone controllers on a standard commercial-grade computer.
RLtools introduces several breakthrough solutions to the training hurdles outlined below, achieving a 75x speed-up compared to popular reinforcement learning libraries and drastically reducing training time and resource requirements. The library is also highly resource-efficient, enabling training on a consumer-grade laptop or even directly on a microcontroller, the kind of small, low-power computing device embedded in robots and drones.
Speaking on the launch, Dario Albani, Senior Director of the Autonomous Robotics Research Center at TII, said, “Through our dynamic synergy with NYU, the introduction and open sourcing of our RLtools library will catalyse unprecedented progress in reinforcement learning for continuous control. By slashing training times and providing a flexible framework, RLtools promises faster and more impactful advancements in autonomous intelligence.”
Historically, researchers and engineers have faced roadblocks when integrating autonomous systems into real-world scenarios: the excessive computational power and time required to train AI models, reliance on sophisticated computing infrastructure, the persistent gap between simulated and real-world environments, and compatibility issues with standard deep learning frameworks. Together, these obstacles have hindered the effective deployment of autonomous systems until now.
Giuseppe Loianno, Assistant Professor and Director of the Agile Robotics and Perception Lab at NYU, said, “RLtools is a crucial advancement in establishing the next generation of practical, resource-efficient, and adaptable autonomous systems. As our research efforts with ARRC continue, RLtools seeks to deliver more exciting solutions, making reinforcement learning a central aspect of future-proof intelligent machines.”
In real-time performance, RLtools-trained controllers match or surpass state-of-the-art controllers deployed on drones worldwide. The library also tackles deployment challenges directly: it runs on microcontrollers and has demonstrated the first-ever training of a deep reinforcement learning algorithm on a microcontroller.
The long-term goal is a single, general-purpose controller capable of autonomous operation and real-time learning, dynamically adjusting its parameters to prevailing conditions. This approach would establish a unified, resilient system ready to navigate diverse environments with precision and efficiency.