by Shagun Sodhani
Numpy is the de-facto choice for array-based operations, while PyTorch is largely used as a deep learning framework. At their core, both provide a powerful N-dimensional tensor. This talk focuses on the similarities and differences between the two and on how we can use PyTorch to augment Numpy.
Numpy is the de-facto choice for performing array-based operations, while PyTorch is largely used as 'a deep learning framework for fast, flexible experimentation'. Even though the two descriptions sound different, both libraries provide access to a powerful N-dimensional array (or, as we say in PyTorch, a tensor). PyTorch supports tensor computations (similar to Numpy) with strong GPU acceleration. In some sense, PyTorch can be used as a replacement for Numpy to harness the power of GPUs, even if your use case is not a machine learning one. The cost of converting a Numpy ndarray to a torch tensor is negligible, as the two share the same storage. Unfortunately, PyTorch cannot be used as a drop-in replacement for Numpy, though PyTorch is 'expected to get closer and closer to NumPy's API where appropriate'.
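To make the storage-sharing point concrete, here is a minimal sketch (assuming `numpy` and `torch` are installed) showing that `torch.from_numpy` creates a tensor backed by the same memory as the ndarray, so the conversion involves no copy:

```python
import numpy as np
import torch

a = np.ones(3)
t = torch.from_numpy(a)  # no copy: the tensor shares the ndarray's storage

t[0] = 5.0               # mutating the tensor...
print(a[0])              # ...is visible through the ndarray: 5.0

b = t.numpy()            # the reverse conversion also shares storage
print(np.shares_memory(a, b))  # True
```

Because the memory is shared, in-place edits on either side are visible on the other; copy explicitly (e.g. `t.clone()` or `a.copy()`) if you need independent buffers.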
The two APIs also differ in places: for example, Numpy uses `axis` while PyTorch uses `dim`. The talk covers these differences and gotchas to look out for.
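As a small illustration of the naming difference, the same reduction is spelled with `axis` in Numpy and `dim` in PyTorch:

```python
import numpy as np
import torch

a = np.arange(6).reshape(2, 3)   # [[0, 1, 2], [3, 4, 5]]
t = torch.from_numpy(a)

# Numpy reduces along an `axis`, PyTorch along a `dim`
col_sums_np = a.sum(axis=0)      # array([3, 5, 7])
col_sums_pt = t.sum(dim=0)       # tensor([3, 5, 7])
```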
The slides and notebook are available here. I will use Colab notebooks for demoing the code. Note that I am deliberately avoiding the perspective of how to train neural networks using PyTorch; the focus is on the interplay between PyTorch and Numpy.
About the Author
Author website: https://shagunsodhani.in/