Cross platform interface for virtualenv

I have developed my entire project (Django, Python) on Windows and all the PaaS out there use Linux.

VirtualEnv on Linux:

VirtualEnv_dir/
    bin/        activate, activate_this.py
    include/
    lib/
    local/

VirtualEnv on Windows:

VirtualEnv_dir/
    Include/
    Lib/
    Scripts/    activate.bat, activate_this.py

Since the virtualenv layout differs so much between Windows and Linux, how should I use my Windows virtualenv on the PaaS?

Edit:

On Windows, I run call virtualenv_dir/Scripts/activate.bat to get into the environment, whereas on Linux it is source virtualenv_dir/bin/activate.

Now, my repo holds a virtualenv generated on Windows (which uses .bat files). When I push the repo to a Linux system, how can I run it there? (.bat files will not work on Linux!)

I am using OpenShift PaaS, where I would like to put a virtualenv in the Git repo. How can I activate it?

What's the best solution?

asked Aug 20 '12 by Surya

1 Answer

Unless you use Windows-specific libraries or an alternate Python implementation (like IronPython), there is nothing to worry about.

Many people (myself included) develop on Windows and deploy to Linux in production, and use virtualenv for exactly this purpose: it keeps your environment reproducible across machines.

You don't push the entire virtualenv to Linux.
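
Instead, the virtualenv directory should be kept out of version control entirely, with only the requirements file committed. A minimal sketch (venv_dir is a placeholder for whatever your virtualenv folder is actually called):

```shell
# Keep the virtualenv out of the repository; commit requirements.txt instead.
# "venv_dir" is a placeholder for your actual virtualenv directory name.
echo "venv_dir/" >> .gitignore
```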

Once you have your virtual environment ready and your code is working, you should freeze the requirements for your application:

pip freeze > requirements.txt

On the target operating system, create an empty virtual environment:

virtualenv --no-site-packages prod_env

In recent versions of virtualenv, --no-site-packages is the default.

Next, populate the environment with your requirements file from development:

source prod_env/bin/activate
pip install -r requirements.txt

When you have a requirements change, simply regenerate the requirements.txt file and run pip install -r requirements.txt in production.
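
The round trip is just these two commands, assuming pip is on the PATH in both environments:

```shell
# Development machine: pin the exact versions currently installed
pip freeze > requirements.txt

# Production machine (inside the activated virtualenv): apply the pins
pip install -r requirements.txt
```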

In some situations, your production systems don't have access to the Internet to download packages, so the pip install trick doesn't work. For these scenarios you can run your own private PyPI server and push your packages there. An added bonus of this route is that you can create and push private packages and install them using the normal setuptools utilities.
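
One way to wire this up, assuming your private server speaks the standard "simple" index protocol (the URL below is a placeholder, not a real server):

```shell
# Point pip at a private index via a per-user pip.conf.
# The URL is a placeholder for your own package server.
mkdir -p ~/.pip
cat > ~/.pip/pip.conf <<'EOF'
[global]
index-url = https://pypi.internal.example.com/simple/
EOF
```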

Once you have decided which process works for you, automate it in your deployment scripts, generally with hooks into your source code management system. Some people prefer a separate release engineering process (with a release manager, that is, a person, not a program).
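
As a rough illustration of such automation, a post-push deploy step might look like the sketch below. The paths, the hook location, and the fallback to the stdlib venv module are all assumptions for illustration, not anything OpenShift mandates:

```shell
#!/bin/bash
# Sketch of a deploy step that rebuilds the environment on every push.
# VENV_DIR and the requirements path are illustrative.
set -e
VENV_DIR="$HOME/prod_env"

# Create the virtualenv on the first deploy only; fall back to the
# stdlib venv module if the virtualenv command is not installed.
if [ ! -d "$VENV_DIR" ]; then
    command -v virtualenv >/dev/null && virtualenv "$VENV_DIR" \
        || python3 -m venv "$VENV_DIR"
fi

# Activate it and sync dependencies with the pinned requirements.
. "$VENV_DIR/bin/activate"
[ -f requirements.txt ] && pip install -r requirements.txt || true
```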

answered Oct 16 '22 by Burhan Khalid