There was an official(?) recommendation for running an IPython Notebook server, which involved creating a profile via
$ ipython profile create nbserver
as described in http://ipython.org/ipython-doc/1/interactive/public_server.html. This allowed for very different and very useful behavior depending on whether you started the notebook via ipython notebook or ipython notebook --profile=nbserver.
With Jupyter 4.0 this changed: there are no longer profiles. I found the conversation https://gitter.im/ipython/ipython/archives/2015/05/29 in which user minrk says:
The .ipython directory has several things in it:
multiple config directories (called profiles)
one 'data' directory, containing things like kernelspecs, nbextensions
runtime info scattered throughout, but mostly in profiles
Jupyter follows more platform-appropriate conventions:
one config dir at JUPYTER_CONFIG_DIR, default: .jupyter
one data dir at JUPYTER_DATA_DIR, default: platform-specific
one runtime dir at JUPYTER_RUNTIME_DIR, default: platform-specific
And a rather cryptic remark:
If you want to use different config, specify a different config directory with JUPYTER_CONFIG_DIR=whatever
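You can check where these three locations resolve on your own system with the jupyter --paths command (the output below is only illustrative; the exact paths are platform-specific):
$ jupyter --paths
config:
    /home/user/.jupyter
    /usr/local/etc/jupyter
data:
    /home/user/.local/share/jupyter
    /usr/local/share/jupyter
runtime:
    /home/user/.local/share/jupyter/runtime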
What's the best way to get different behavior (say, between when running as a server vs normal usage)?
Will it involve running something like:
$ export JUPYTER_CONFIG_DIR=~/.jupyter-nbserver
$ jupyter notebook
whenever a server 'profile' needs to be run? and
$ export JUPYTER_CONFIG_DIR=~/.jupyter
$ jupyter notebook
whenever a 'normal' profile needs to run? Because that seems terrible. What's the best way to do this in Jupyter 4.0?
Jupyter allows a few magic commands that are great for timing and profiling a line of code or a block of code. Let us take a look at a really simple example: to see which of two implementations is faster, you can use the %timeit magic command, where -n 3 denotes the number of loops to execute.
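A minimal sketch (sum_loop and sum_builtin are hypothetical stand-ins for whatever pair of functions you want to compare):

def sum_loop(n):
    # naive Python loop
    total = 0
    for i in range(n):
        total += i
    return total

def sum_builtin(n):
    # built-in sum over a range
    return sum(range(n))

%timeit -n 3 sum_loop(1_000_000)
%timeit -n 3 sum_builtin(1_000_000)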
Python's profiler creates a detailed report of the execution time of our code, function by function. In such a report we can observe, for example, the number of calls of functions like histogram(), cumsum(), step(), sort(), and rand(), and the total time spent in those functions during the code's execution. Internal functions are also profiled.
IPython offers the %prun line magic and the %%prun cell magic to easily profile one or multiple lines of code. The %run magic command also accepts a -p flag to run a Python script under the control of the profiler. These commands accept many options, as can be seen with %prun? and %run?.
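For instance, a short session loosely modeled on the functions named above (a sketch; random_walk is a hypothetical example function, and the exact timings will differ on your machine):

import numpy as np

def random_walk(n):
    # cumulative sum of random steps
    steps = np.random.rand(n) - 0.5
    return np.cumsum(steps)

%prun -s cumulative random_walk(10_000_000)

The -s cumulative option sorts the report by cumulative time per function.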
To use line_profiler, you would normally modify your code and decorate the functions you want to profile with @profile:

@profile
def slow_function(a, b, c):
    ...

However, one trick that I find especially useful is to use the line_profiler extension within Jupyter.
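With the extension loaded, you can profile a function line by line without editing its source at all (a sketch; slow_function and its arguments are the same hypothetical placeholders as above):

%load_ext line_profiler
%lprun -f slow_function slow_function(a, b, c)

%lprun runs the given statement under line_profiler, and the -f flag names the function whose lines you want timed.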
Using some code from this blog post http://www.svds.com/jupyter-notebook-best-practices-for-data-science/ and updating it, the easiest solution appears to be to create an alias, like:
alias jupyter-nbserver='JUPYTER_CONFIG_DIR=~/.jupyter-nbserver jupyter notebook'
Now you can run the Jupyter Notebook with a different config via the simple command jupyter-nbserver.
A more robust solution might involve creating a bash function that sets the environment variable, checks whether a config file exists, creates one if it doesn't, and then launches the notebook; that's probably overkill, but a sketch is below. The answer I give on this related question https://stackoverflow.com/a/32516200/246856 goes into creating the initial config files for a new 'profile'.
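For completeness, a minimal sketch of such a function (the function name and directory naming scheme are arbitrary; jupyter notebook --generate-config is the standard way to create a default config file):

jupyter_profile() {
    # Usage: jupyter_profile nbserver  ->  runs with ~/.jupyter-nbserver
    local dir="$HOME/.jupyter-$1"
    if [ ! -f "$dir/jupyter_notebook_config.py" ]; then
        # generate a default config in the chosen directory first
        JUPYTER_CONFIG_DIR="$dir" jupyter notebook --generate-config
    fi
    JUPYTER_CONFIG_DIR="$dir" jupyter notebook
}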