Aug 15 2025
How PyTorch unlocks AI research and productization at scale

Takeaways

  • Since Meta created PyTorch in 2016, it has grown into the leading deep learning framework for AI and ML.
  • PyTorch’s purpose has evolved from strictly AI research into a tool that stretches across research and productization, enabling developers to iterate faster and unlock future breakthroughs.
  • As new AI use cases emerge in areas such as ambient computing and wearables, PyTorch will act as a critical common ground for AI researchers and developers to share learnings and improve model performance.

When the Meta Fundamental AI Research (FAIR) team first created PyTorch in 2016, it was designed to accelerate training and improve the developer experience for AI researchers. It has since grown to become the most popular deep learning framework for AI and ML, streamlining the path from research prototyping to production deployment.

Today, 63% of training models are built on PyTorch and 70% of AI research implementations use the framework. The PyTorch Foundation even released a documentary last year about PyTorch’s history and role in advancing AI innovation. Adoption of PyTorch remains strong, with its tools ecosystem growing by 25% in 2024 to deliver new enhancements in hardware and software capabilities.

However, the journey to get here has been long and filled with complex technology hurdles. Joe Spisak and Natalia Gimelshein are two Meta teammates who have been part of the PyTorch journey since the very beginning, helping grow the framework into the foundational enabler of AI innovation that it is today.


Driving PyTorch adoption at scale

As director of product management for AI at Meta, Joe leads the strategy behind PyTorch. He remembers the early days of PyTorch as a busy but exciting time when Meta was focused on encouraging widespread adoption among developers so that PyTorch could expand beyond research into AI productization.

“Our goal was to bring PyTorch into every place possible, whether on the edge, on mobile devices or in the cloud,” explains Joe. “We especially wanted to make PyTorch accessible for developers. Whether they’re training small models or large models, or they’re deploying AI in their applications, they need hardware support. For that to happen, we had to give the community options so they could run on different types of platforms.”

Software engineer Natalia Gimelshein and product leader Joe Spisak walking and talking in a Meta office.

Natalia agrees with the need to make PyTorch flexible and modular. She is a PyTorch infrastructure software engineer who joined Meta to contribute to the release of PyTorch 2.0 with compile support.

“Many companies adopt PyTorch as their main vehicle for deep learning workflows because it allows them to quickly share code and write in Python without being constrained by framework-imposed limitations,” she shares. “It’s very difficult for an individual contributor to write an AI prototype without extensive support and compute resources. When we released PyTorch 2.0, we knew we needed to create a design that was compatible with users’ existing deep learning models but would still enable better performance to keep pace with increasing GPU speeds.”


Unlocking innovation in AI research and productization

The hard work from Joe, Natalia and their team paid off. Today, PyTorch runs more than 5 trillion inferences per day across 50 data centers and continues to be a core driver of AI research innovation — including here at Meta.

“PyTorch is instrumental in unlocking new breakthroughs in the AI space,” says Joe. “Our FAIR team uses PyTorch as their primary language for developing Gen AI, and the Llama team uses it as a foundational library and language to train and develop new models. At the same time, PyTorch underpins many of our products today — not only running in our data centers but also on the edge and even on our Ray-Ban Meta glasses. It acts as a lingua franca between AI research and product teams, allowing them to work collaboratively.”

For Joe, this deep integration between research and product is central to the promise of PyTorch. It also makes for a more engaging work experience.

“At Meta, innovation happens at every layer of the stack. We have data centers and system design, we develop our own silicon, we build compilers and the PyTorch framework used to train our own Llama models, we conduct fundamental research on PyTorch, and so much more. The vertical integration is astonishing!”

Ultimately, Joe believes that PyTorch will continue to straddle the line between research and productization, uniting the latest and greatest developments from each camp to unlock future AI breakthroughs.

“The future of AI will only continue to grow more diverse — whether it’s on devices, in ambient computing or on wearables. I believe PyTorch will act as a center of mass to bring the AI community together.”
