In my CI I have a build step which uses pip and is correctly set up to use private indexes, etc. Then I have a test step, which executes python setup.py test. In this particular case, test is an alias for pytest.

Setuptools and pip resolve package dependencies differently, which leads to the test step attempting to re-install some of the packages. I would like to prevent that. Is there a way to do that, either via setuptools configuration (preferred) or via pytest configuration?
Update:
By popular demand, a repo that exemplifies the issue I'm having: https://github.com/vartec/example_repo_setuptools_issue
The setup.py file sits at the root of the Python project directory and carries the package metadata and dependency declarations: name, version, author, license, minimal dependencies, entry points, data files, and so on. Some of those requirements are mandatory for building and running the package; others (for example test or documentation dependencies) are optional.

Note also that invoking setup.py directly is deprecated. Recent setuptools warns: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools. There is a very detailed write-up explaining this: "Why you shouldn't invoke setup.py directly" (October 2021).

When pip installs your project, it installs all of the dependencies listed in install_requires. For example, if your project uses matplotlib and also imports NumPy directly, list both in install_requires:

setup(
    ...
    install_requires=[
        'matplotlib',
        'numpy',
    ],
    ...
)
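A minimal sketch of the standards-based workflow that warning points to (assuming the project builds cleanly and the artifacts land in dist/; adjust paths as needed):

python -m pip install build          # PEP 517 build frontend
python -m build                      # produces an sdist and a wheel in dist/
python -m pip install dist/*.whl     # install the built wheel instead of running setup.py install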
So, as I understand it, the problem is that setuptools installs the dependencies it needs into a local folder (.eggs, if I remember correctly), not into the normal virtualenv that you use. And you do not like how setuptools does that. Neither do I, especially when I need a local DevPI server to be used (setuptools ignores it). So I do it this way:
setup(
    ...
    install_requires=['monotonic'],  # just for example
    extras_require={
        'test': ['pytest', 'pytest-timeout'],
        'docs': ['sphinx', 'alabaster'],
    },
)
When you need to test, assuming that you have created and activated a virtualenv somehow:
pip install -e .[test]
Here, . is the current directory, -e means editable mode (it can be omitted), and [test] is a setuptools "extra". You can declare multiple "extras" and install several at once, e.g. pip install mylib[ext1,ext2], if needed.
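Note that in some shells (zsh, for example) square brackets are glob characters, so the requirement has to be quoted:

pip install -e '.[test]'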
Then you can run the tests either way:
pytest
python setup.py test
The latter works only if the test command is already configured to run pytest (see the pytest integration documentation).
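For reference, here is a minimal sketch of one common way such a test alias used to be wired up, using a custom command class in setup.py. The package name and version below are placeholders, and this pattern (like setup.py test itself) is deprecated in current setuptools:

import shlex
import sys

from setuptools import setup
from setuptools.command.test import test as TestCommand


class PyTest(TestCommand):
    # Allows e.g. `python setup.py test --pytest-args="-x -k smoke"`.
    user_options = [('pytest-args=', 'a', 'Arguments to pass to pytest')]

    def initialize_options(self):
        TestCommand.initialize_options(self)
        self.pytest_args = ''

    def run_tests(self):
        # Imported here because pytest may not be importable when setup.py is parsed.
        import pytest
        sys.exit(pytest.main(shlex.split(self.pytest_args)))


setup(
    name='mylib',        # placeholder name
    version='0.0.1',     # placeholder version
    tests_require=['pytest'],
    cmdclass={'test': PyTest},
)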
The trick is that if setuptools can find the test dependencies in the current environment (virtualenv, pyenv, system Python, whatever else), it will not install them as eggs and will just use the installed versions.
Actually, you do not even need to declare tests_require= in that case, as the libraries are assumed to be installed into the virtualenv. If they are not, the test command will just fail.
Similarly, you can pip install .[docs] and build your docs with the sphinx-build ... command from the current virtualenv.
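For example, assuming a conventional docs/ source directory with HTML output going to docs/_build/html (these paths are assumptions, not something from the question):

pip install .[docs]
sphinx-build -b html docs docs/_build/html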
Note that the install_requires dependencies are always installed, no matter which extras you add. So the app/lib itself will always be fully functional, importable, and introspectable.
Hopefully this answers the question (if I understood it correctly).