I have a Python virtual environment declared in my workspace settings, and everything is fine with it. Now I have a build task that calls a make target, which in turn calls a pip-installed package. When I run the task, it doesn't use my venv even though it is the selected interpreter.
I've tried adding an activate venv/... step before calling my make command, but I get a Permission denied error in the integrated terminal.
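A failing attempt along those lines might look roughly like the sketch below (the label, venv folder name, and make target are illustrative, not my exact config); the activate script has no execute bit and is meant to be sourced rather than run directly, which is why it produces Permission denied:
{
    "version": "2.0.0",
    "tasks": [
        {
            "label": "build",
            "type": "shell",
            // activate must be sourced, not executed, hence "Permission denied"
            "command": "venv/bin/activate && make build"
        }
    ]
}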
How can I use a virtual environment and tasks at the same time?
It might be a little late to answer your question, but the trick is to set the command field to point to the virtual environment's python executable instead of the default python. If you set up your .vscode/settings.json correctly, you should have something like this in your file:
{
"python.pythonPath": "env\\Scripts\\python.exe",
// other settings ...
}
Having this config in your workspace, you can then create a custom task of type process using ${config:python.pythonPath} as its command field. You can read all about it in this section of VS Code's docs.
This example creates a Django python manage.py migrate task:
{
"version": "2.0.0",
"tasks": [
{
"label": "Migrate",
"type": "process",
"command": "${config:python.pythonPath}",
"args": [
"${workspaceFolder}/src/manage.py",
"migrate"
],
"presentation": {
"echo": true,
"reveal": "always",
"focus": false,
"panel": "shared",
"showReuseMessage": true,
"clear": true
},
"problemMatcher": []
}
]
}
I tested with the following tasks.json and it works:
{
"version": "2.0.0",
"tasks": [
{
"label": "test",
"type": "shell",
"command": "source /home/me/.pyenvs/myenv/bin/activate; make"
}
]
}
It first activates the virtual environment and then executes make.
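The same idea should also work on Windows with a per-task windows override; a minimal sketch, assuming the venv lives in an env folder inside the workspace and that make is available on PATH (both of these names are assumptions):
{
    "version": "2.0.0",
    "tasks": [
        {
            "label": "test",
            "type": "shell",
            // Linux/macOS: source the venv's activate script, then run make
            "command": "source ./env/bin/activate && make",
            "windows": {
                // use cmd.exe so activate.bat and && behave as expected
                "command": "env\\Scripts\\activate.bat && make",
                "options": {
                    "shell": {
                        "executable": "cmd.exe",
                        "args": ["/C"]
                    }
                }
            }
        }
    ]
}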
I had quite a lot of trouble with this recently myself. I wanted to update package requirements before running code, but I didn't want to implement a complex or maintenance-heavy solution. The following works well for me across Windows, OSX, and Linux environments.
{
"version": "2.0.0",
"tasks": [
{
"label": "pipInstall",
"type": "shell",
"command": "${command:python.interpreterPath}",
"args": ["-m", "pip", "install", "-r", "requirements.txt"]
}
]
}
I believe ${command:python.interpreterPath} points to whichever interpreter you have selected for the workspace while setting up the environment.
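To actually run your code after the requirements are updated, tasks can be chained with dependsOn; a minimal sketch, where main.py is just a placeholder entry point:
{
    "version": "2.0.0",
    "tasks": [
        {
            "label": "pipInstall",
            "type": "shell",
            "command": "${command:python.interpreterPath}",
            "args": ["-m", "pip", "install", "-r", "requirements.txt"]
        },
        {
            "label": "run",
            "type": "process",
            "command": "${command:python.interpreterPath}",
            // main.py is a placeholder for whatever entry point you run
            "args": ["${workspaceFolder}/main.py"],
            // runs pipInstall first, then this task
            "dependsOn": ["pipInstall"]
        }
    ]
}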
I'm late to the party, but this alternative might be useful. If you use pipenv instead of a standard venv, you can use pipenv run. It will activate the virtualenv before running the process. For example, this works for building Sphinx documentation:
{
"version": "2.0.0",
"tasks": [
{
"label": "build html",
"type": "process",
"command": "pipenv",
"args": [
"run",
"sphinx-build",
"-b",
"html",
"${workspaceFolder}",
"${workspaceFolder}/_build/html"
],
"group": {
"kind": "build",
"isDefault": true
},
"presentation": {
"reveal": "always",
"panel": "new"
}
}
]
}
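Since pipenv run can wrap any command, the same approach should also cover the make scenario from the question; a minimal sketch, assuming a Makefile in the workspace root:
{
    "version": "2.0.0",
    "tasks": [
        {
            "label": "make in venv",
            "type": "process",
            "command": "pipenv",
            // python/pip invoked by the make target resolve inside the virtualenv
            "args": ["run", "make"]
        }
    ]
}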
You can modify the shell in tasks.json in a way that works for both bash and cmd shells. This worked for me, and it correctly prints the full path to the python executable in the virtual environment.
{
"version": "2.0.0",
"linux": {
"options": {
"shell": {
"executable": "bash",
"args": [
"--init-file",
"env/bin/activate",
"-ci",
]
}
},
},
"windows": {
"options": {
"shell": {
"executable": "cmd.exe",
"args": [
"/C",
"env\\Scripts\\activate.bat",
"&"
]
}
}
},
"tasks": [
{
"label": "test shell with virtualenv",
"command": "which python", // where on Windows for testing
"type": "shell",
"group": "build",
"problemMatcher": []
}
]
}
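Because every shell task now starts in the activated environment, the make build from the question should work as a plain entry in the tasks array above; a sketch, assuming a Makefile in the workspace root:
{
    "label": "build with make",
    "type": "shell",
    // python/pip invoked by the Makefile now resolve to the virtual environment
    "command": "make",
    "group": "build",
    "problemMatcher": []
}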