OpenAI just chose PyTorch over TensorFlow

OpenAI standardised its deep learning framework on PyTorch

In a major move, OpenAI, the research organisation co-founded by Elon Musk, has standardised on PyTorch as its primary framework for development. The announcement came on January 30, 2020 in a company blog post, which emphasised that the move will provide the team with a trouble-free path to create and share optimised implementations of machine learning models internally.

OpenAI is a research organisation focused on discovering and enacting the path to safe artificial general intelligence. It was founded by Elon Musk, Sam Altman, Ilya Sutskever and Greg Brockman on December 11, 2015, and is currently based in San Francisco, California. OpenAI's mission is to ensure that artificial general intelligence benefits all of humanity, and it tries to empower as many humans as possible with the power of AI so that no single individual or group holds an AI superpower. In the second half of 2019, Microsoft invested one billion US dollars in OpenAI, making it a dominant figure in research on safe artificial general intelligence.

In its blog post, OpenAI states:

The main reason we chose PyTorch is to increase our research productivity at scale on GPUs.

Developed by Facebook's AI Research lab, PyTorch is an open source machine learning library based on the Torch library; it was first released to the public in October 2016. PyTorch is specifically designed to accelerate the path from research prototyping to production deployment. Even Tesla uses PyTorch to develop full self-driving capabilities for its vehicles, including Autopilot and Smart Summon.

OpenAI also states:

It is very easy to try and execute new research ideas in PyTorch; for example, switching to PyTorch decreased our iteration time on research ideas in generative modeling from weeks to days.

OpenAI's adoption of PyTorch gives Facebook a considerable upper hand over TensorFlow, which is developed by Google's Brain team. TensorFlow 1.x relies on a static computational graph that users have to define first: the model has to go through a compilation phase to generate the computational graph before anything can run. However, there are substantial changes introduced as part of TensorFlow 2.0, such as eager execution by default, that make things more dynamic.
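
To make the contrast concrete, here is a minimal sketch of the TensorFlow 1.x static-graph workflow described above. The tensors and values are illustrative, not taken from OpenAI's post:

```python
import tensorflow as tf  # TensorFlow 1.x style (tf.compat.v1 in TF 2.x)

# Define the graph first: nothing is computed at this point.
a = tf.placeholder(tf.float32, shape=(), name="a")
b = tf.placeholder(tf.float32, shape=(), name="b")
c = a * b  # adds a multiply node to the static graph

# Only inside a session does the graph actually execute.
with tf.Session() as sess:
    result = sess.run(c, feed_dict={a: 3.0, b: 4.0})
    print(result)  # 12.0
```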

PyTorch comes with a dynamic computational graph from the word go, which makes executing relatively small pieces of code much easier. Users don't have to wait for a long compilation process to build and run their models, which means computations run almost instantaneously. This is effective for rapid prototyping and for analysing research ideas with quick feedback, and it is the main motive behind the move, along with effective model sharing across teams that use a standardised framework.
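
By comparison, the same computation in PyTorch runs eagerly; this illustrative sketch shows the graph being built on the fly as operations execute, then reused for gradients:

```python
import torch

# Operations execute immediately; the graph is recorded as the code runs.
a = torch.tensor(3.0, requires_grad=True)
b = torch.tensor(4.0, requires_grad=True)
c = a * b          # computed right away, node recorded for autograd
print(c.item())    # 12.0

c.backward()       # backpropagate through the dynamically built graph
print(a.grad)      # tensor(4.)  since dc/da = b
```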

The team will also use frameworks like TensorFlow at OpenAI when they suit a particular use case, as they state:

Going forward we’ll primarily use PyTorch as our deep learning framework but sometimes use other ones when there’s a specific technical reason to do so.

OpenAI, as per its blog, is in the process of writing PyTorch bindings for its highly optimised blocksparse kernels and will open-source those bindings in the coming months. It has also released a PyTorch version of Spinning Up in Deep RL, an educational resource produced by OpenAI that makes it easier to learn about deep reinforcement learning, on GitHub.
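
For readers who want to try the PyTorch release of Spinning Up, a minimal training call might look like the sketch below, following the pattern in the project's documentation and assuming the package is installed from its GitHub repo. The environment choice and hyperparameters here are illustrative, not prescribed by OpenAI:

```python
import gym
from spinup import ppo_pytorch as ppo  # PyTorch implementation of PPO

# Illustrative environment; any Gym environment factory works here.
env_fn = lambda: gym.make("CartPole-v1")

# Train a small PPO agent; sizes and epoch counts are illustrative.
ppo(env_fn=env_fn,
    ac_kwargs=dict(hidden_sizes=(64, 64)),
    steps_per_epoch=4000,
    epochs=50)
```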

Read the full blog from OpenAI here.

Thanks for your time!
