I believe Python is the superior workbench in my field. I do a lot of scraping, data wrangling, work with large datasets, network analysis, Bayesian modeling, and simulation. These tasks typically demand speed and flexibility, so I find Python works better than R for them. Here are a few things I like about Python (some have been mentioned above, others have not):
-Cleaner syntax and more readable code. I find Python to be a more modern and syntactically consistent language.
-Python has the IPython Notebook and other excellent tools for code sharing, collaboration, and publishing.
-The IPython Notebook lets you run R inside your Python workflow (via the rpy2 extension's %%R magic), so it is always possible to fall back on R.
-Substantially faster without dropping down to C by hand. Cython, Numba, and other C-integration tools can bring your code to speeds comparable to pure C. As far as I am aware, this cannot be matched in R.
-pandas, NumPy, and SciPy blow standard R out of the water. Yes, there are a few things R can do in a single line that take pandas three or four. In general, however, pandas handles larger datasets, is easier to use, and offers remarkable flexibility when integrating with other Python packages and methods.
-Python is more stable. Try loading a 2 GB dataset into RStudio.
-One neat package that doesn't seem to have been mentioned above is PyMC3, a great general-purpose package for most Bayesian modeling.
-Some commenters above mention ggplot2 and grumble about its absence from Python. If you have ever used MATLAB's plotting facilities or matplotlib in Python, you'll know that they are generally far more capable than ggplot2.
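To illustrate the Numba point above, here is a minimal sketch of JIT-compiling a plain Python loop. The function name and data are made up for illustration, and the try/except fallback is only there so the snippet still runs where Numba is not installed:

```python
import numpy as np

try:
    from numba import njit  # compiles the decorated function to machine code
except ImportError:
    def njit(func):          # fallback: run as ordinary Python
        return func

@njit
def pairwise_sum(xs):
    # An explicit loop that would be slow in pure Python or R;
    # with @njit it runs at near-C speed on a NumPy array.
    total = 0.0
    for x in xs:
        total += x
    return total

arr = np.arange(1_000_000, dtype=np.float64)
print(pairwise_sum(arr))
```

The first call pays a one-time compilation cost; subsequent calls run the compiled code.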
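On the pandas point, here is a short sketch of a typical split-apply-combine operation, roughly what R's aggregate() or dplyr's summarise() would do (the column and group names are invented for illustration):

```python
import pandas as pd

df = pd.DataFrame({
    "site": ["a", "a", "b", "b"],
    "hits": [10, 20, 5, 15],
})

# Group by site and take the mean of each group
means = df.groupby("site")["hits"].mean()
print(means)
```

Because the result is itself a pandas object, it plugs straight into further merges, plots, or model inputs.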
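And for the plotting point, a minimal matplotlib sketch; the data and filename are arbitrary, and the "Agg" backend is chosen only so it renders without a display:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend; no display needed
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(0, 2 * np.pi, 200)

fig, ax = plt.subplots()
ax.plot(x, np.sin(x), label="sin(x)")
ax.set_xlabel("x")
ax.set_ylabel("sin(x)")
ax.legend()
fig.savefig("sine.png")  # write the figure to disk
```

The same object-oriented API scales from this one-liner plot up to multi-panel, fully customized figures.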
That said, R is perhaps easier to learn, and I frequently use it when I am not yet familiar with a modeling procedure; the depth of R's off-the-shelf statistical libraries is unbeatable. Ideally, I would know both well enough to use whichever fits the need.