OpenAI is an artificial intelligence (AI) research laboratory consisting of the for-profit corporation OpenAI LP and its parent company, the non-profit OpenAI Inc. Its stated goal is to advance digital intelligence, unconstrained by a need to generate a financial return.
AI has been a surprising field. In the early days, people thought that solving specific tasks, such as winning at chess, would lead to the discovery of human-level intelligence algorithms. However, the solution to each task turned out to be far more specialised and complicated than people expected. Over the past few years, the field has delivered a different flavour of surprise: deep learning, built on techniques that had existed for decades, began achieving state-of-the-art results across a wide variety of problem domains.
This approach has yielded outstanding results on pattern recognition problems, such as recognising objects in images, machine translation, and speech recognition. But we have also started to see what it might be like for computers to be creative, to dream, and to experience the world.
The Growth Story
October 2015: Elon Musk, Sam Altman, and other investors announced the formation of OpenAI and pledged over $1 billion to the venture. The organisation stated they would “freely collaborate” with other institutions and researchers by making its patents and research open to the public.
April 2016: OpenAI released a public beta of ‘OpenAI Gym’, its platform for reinforcement learning research.
December 2016: OpenAI released Universe, a software platform for measuring and training an AI’s general intelligence across the world’s supply of games, websites and other applications.
February 2018: Elon Musk resigned his board seat, citing “a potential future conflict (of interest)” with Tesla AI development for self-driving cars, but remained a donor.
2019: OpenAI transitioned from non-profit to for-profit. The company distributed equity to its employees and partnered with Microsoft Corporation, which announced an investment package of $1 billion into the company. OpenAI then announced its intention to commercially licence its technologies, with Microsoft as its preferred partner.
2020: OpenAI is headquartered in San Francisco’s Mission District and shares the former Pioneer Trunk Factory building with Neuralink, another company co-founded by Elon Musk. OpenAI also announced GPT-3, a language model trained on hundreds of billions of words from the Internet, and said that an associated API, named simply ‘the API’, would form the heart of its first commercial product. GPT-3 is aimed at answering questions in natural language, but it can also translate between languages and coherently generate improvised text.
OpenAI is viewed as a significant competitor to DeepMind.
OpenAI Gym is a toolkit for developing and comparing reinforcement learning algorithms. It supports teaching agents everything from walking to playing games like Pong or Pinball. It aims to provide an easy-to-set-up, general-intelligence benchmark with a wide range of different environments, somewhat similar to — but broader than — the ImageNet Large Scale Visual Recognition Challenge used in supervised learning research. It hopes to standardise how environments are defined in AI research publications, so that published research becomes more easily accessible and reproducible. The project claims to provide the user with a simple interface.
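The heart of that simple interface is a standard agent-environment loop: the environment exposes `reset()` and `step(action)`, and the same loop runs unchanged across every environment. Here is a minimal sketch of that interface using a hypothetical toy environment written from scratch, rather than the real library (whose environments, such as CartPole, follow the same shape):

```python
import random

class ToyCartEnv:
    """A minimal stand-in for a Gym-style environment (not the real library).

    Gym environments expose reset() -> observation and
    step(action) -> (observation, reward, done, info).
    Here a 'cart' must stay within [-3, 3] for up to 100 steps.
    """

    def reset(self):
        self.position = 0.0
        self.steps = 0
        return self.position

    def step(self, action):
        # action: 0 = push left, 1 = push right
        self.position += -0.5 if action == 0 else 0.5
        self.position += random.uniform(-0.2, 0.2)  # random drift
        self.steps += 1
        done = abs(self.position) > 3.0 or self.steps >= 100
        reward = 1.0  # +1 for every step survived
        return self.position, reward, done, {}


def run_episode(env, policy):
    """The standard loop shared by all Gym-style environments."""
    obs = env.reset()
    total_reward, done = 0.0, False
    while not done:
        action = policy(obs)
        obs, reward, done, _ = env.step(action)
        total_reward += reward
    return total_reward


# A trivial policy: always push back towards the centre.
centre_policy = lambda obs: 0 if obs > 0 else 1
```

The toolkit's value is that only the environments change; the loop above is what every algorithm plugs into.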
In ‘RoboSumo’, virtual humanoid ‘meta-learning’ robots initially lack knowledge of how even to walk, and are given the goal of acquiring these skills through a self-learning process. The agents then learn how to adapt to changing conditions. When an agent is removed from this virtual environment and placed in a new virtual environment with high winds, the agent braces to remain upright, suggesting it has learned how to balance and walk in a generalised way.
In 2018, OpenAI launched a Debate Game that teaches machines to debate toy problems in front of a human judge. The primary purpose is to research whether such an approach may assist in auditing AI decisions and developing explainable AI.
Dactyl uses machine learning (ML) to train a robot Shadow Hand from scratch, using the same reinforcement learning algorithms and training code that OpenAI Five uses. The robot hand is trained entirely in simulation, even though the simulation is physically inaccurate, and the learned behaviour transfers to the real hand.
- GPT (Generative Pre-trained Transformer): Improving Language Understanding with Unsupervised Learning
GPT was launched to improve language understanding with unsupervised learning. By combining transformers with unsupervised pre-training, it provided convincing evidence that pairing supervised fine-tuning with unsupervised pre-training works well.
- GPT-2 (Generative Pre-trained Transformer-2): Better Language Models and Their Implications
GPT-2, a successor to GPT, is trained to predict the next word in 40GB of Internet text. It is a sizable transformer-based language model with 1.5 billion parameters, trained on a dataset of 8 million web pages. It is trained with a simple objective: predict the next word, given all the previous words within some text. The diversity of the dataset causes this simple goal to contain naturally occurring demonstrations of many tasks across diverse domains. GPT-2 is a direct scale-up of GPT, with more than 10X the parameters and trained on more than 10X the amount of data.
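The ‘predict the next word, given all the previous words’ objective can be illustrated with a deliberately tiny stand-in: a bigram model that merely counts which word follows which in a small corpus. (GPT-2 differs fundamentally — it conditions on the whole preceding context with a 1.5-billion-parameter transformer over subword tokens — but the prediction task is the same.)

```python
from collections import Counter, defaultdict

def train_bigram(corpus: str):
    """Count, for each word, which words follow it — a toy stand-in
    for the 'predict the next word' training objective."""
    words = corpus.split()
    following = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1
    return following

def predict_next(model, word: str) -> str:
    """Return the word most frequently seen after `word`."""
    return model[word].most_common(1)[0][0]

# A tiny illustrative corpus (hypothetical, for demonstration only).
corpus = "the cat sat on the mat the cat ate the fish the dog sat on the rug"
model = train_bigram(corpus)
```

Given this corpus, `predict_next(model, "the")` returns `"cat"` — the word that most often follows "the". Scaling the same objective to 40GB of text and a large transformer is what gives GPT-2 its breadth.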
- GPT-3 (Generative Pre-trained Transformer-3): Powers the Next Generation of Apps
GPT-3, a successor to GPT-2, is an autoregressive language model that uses deep learning to produce human-like text. It is the third-generation language prediction model in the GPT-n series. Its ability to identify themes in natural language and generate summaries allows companies such as Viable to give product, customer experience and marketing teams across industries a better understanding of their customers’ wants and needs. On September 23, 2020, GPT-3 was licensed exclusively to Microsoft.
OpenAI’s MuseNet is a deep neural net trained to predict subsequent musical notes in MIDI music files. It can also generate songs with 10 different instruments in 15 different styles.
DALL-E is a 12-billion parameter version of GPT-3 trained to generate images from text descriptions using a dataset of text-image pairs. It has a diverse set of capabilities, including creating anthropomorphised versions of animals and objects, combining unrelated concepts in plausible ways, rendering text and applying transformations to existing images.
OpenAI Microscope is a collection of visualisations of every significant layer and neuron of eight neural network models that are often studied in interpretability research. The Microscope was created for easy analysis of the features that form inside these neural networks. The models included are AlexNet, VGG-19, different versions of Inception, and different versions of CLIP ResNet.
OpenAI Five is the name of a team of five OpenAI-curated bots for the competitive five-on-five video game Dota 2, which learn to play against human players at a high skill level through trial-and-error algorithms.
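‘Trial and error’ here means reinforcement learning: the bots act, observe rewards, and adjust. A minimal sketch of that idea is tabular Q-learning on a toy corridor world — note this is a generic illustration of trial-and-error learning, not OpenAI Five’s actual method (which is large-scale policy-gradient training far beyond a lookup table):

```python
import random

def q_learning_corridor(n_states=5, episodes=500, alpha=0.5, gamma=0.9, eps=0.1):
    """Tabular Q-learning on a corridor of n_states cells.

    The agent starts in state 0 and earns +1 for reaching the last
    state. Actions: 0 = step left, 1 = step right. Purely a toy
    demonstration of trial-and-error value learning.
    """
    random.seed(0)
    q = [[0.0, 0.0] for _ in range(n_states)]  # q[state][action]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # Epsilon-greedy: mostly exploit, occasionally explore.
            if random.random() < eps:
                a = random.randint(0, 1)
            else:
                a = 0 if q[s][0] > q[s][1] else 1
            s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
            r = 1.0 if s2 == n_states - 1 else 0.0
            # Standard Q-learning update towards the observed return.
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q
```

After training, the greedy policy steps right from every state — the agent has discovered the rewarding behaviour purely from trial and error, which is the same principle the Dota 2 bots apply at vastly greater scale.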
Gym Retro is a platform for reinforcement learning research on games. It is used to conduct research on RL algorithms and study generalisation. Gym Retro gives the ability to generalise between games with similar concepts but different appearances.
OpenAI’s research director is Ilya Sutskever, one of the world’s experts in machine learning. The CTO is Greg Brockman, formerly the CTO of Stripe. The group’s other founding members are world-class research engineers and scientists: Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, John Schulman, Pamela Vagata, and Wojciech Zaremba. Pieter Abbeel, Yoshua Bengio, Alan Kay, Sergey Levine, and Vishal Sikka are advisors to the group. OpenAI’s co-chairs are Sam Altman and Elon Musk.
Strategy For Success
Musk asked: “What is the best thing we can do to ensure the future is good? We could sit on the sidelines, or we can encourage oversight, or we could participate with the right structure with people who care deeply about developing AI in a way that is safe and is beneficial to humanity.” He acknowledges that “there is always some risk that in actually trying to advance AI we may create the thing we are concerned about.”
Musk and Altman’s counter-intuitive strategy of reducing the risk that AI will cause overall harm by giving AI to everyone is controversial among those concerned with AI’s existential risk. Conversely, OpenAI’s initial decision to withhold GPT-2, out of a wish to ‘err on the side of caution’ in the presence of potential misuse, has been criticised by advocates of openness.
According to OpenAI, the capped-profit model adopted in March 2019 allows OpenAI LP to legally attract investment from venture funds and to grant employees stakes in the company. The goal is that an employee can say, “I’m going to OpenAI, but in the long term it’s not going to be disadvantageous to us as a family.”