I find myself trying to convert constructor parameters to their right types very often in my Python programs. So far I've been using code similar to this, so I don't have to repeat the exception arguments:
class ClassWithThreads(object):
    def __init__(self, num_threads):
        try:
            self.num_threads = int(num_threads)
            if self.num_threads <= 0:
                raise ValueError()
        except ValueError:
            raise ValueError("invalid thread count")
Is this a good practice? Or should I just not bother catching any exceptions on conversion and let them propagate to the caller, with the possible disadvantage of less meaningful and less consistent error messages?
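For comparison, here is a variation of the same idea (a sketch only, assuming Python 3) that chains the original exception, so the caller keeps the underlying cause while still getting a consistent message:

class ClassWithThreads(object):
    def __init__(self, num_threads):
        try:
            self.num_threads = int(num_threads)
        except (TypeError, ValueError) as exc:
            # Re-raise with a consistent message but keep the original cause (Python 3).
            raise ValueError("invalid thread count: %r" % (num_threads,)) from exc
        if self.num_threads <= 0:
            raise ValueError("invalid thread count: %r" % (num_threads,))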
When I have a question like this, I go hunting in the standard library for code that I can model my code after. multiprocessing/pool.py has a class somewhat close to yours:
class Pool(object):
    def __init__(self, processes=None, initializer=None, initargs=(),
                 maxtasksperchild=None):
        ...
        if processes is None:
            try:
                processes = cpu_count()
            except NotImplementedError:
                processes = 1
        if processes < 1:
            raise ValueError("Number of processes must be at least 1")

        if initializer is not None and not hasattr(initializer, '__call__'):
            raise TypeError('initializer must be a callable')
Notice that it does not say
processes = int(processes)
It just assumes you sent it an integer, not a float or a string, or whatever. It should be pretty obvious, but if you feel it is not, I think it suffices to just document it.
It does raise ValueError if processes < 1, and it does check that initializer, when given, is callable.
So, if we take multiprocessing.Pool as a model, your class should look like this:
class ClassWithThreads(object):
    def __init__(self, num_threads):
        self.num_threads = num_threads
        if self.num_threads < 1:
            raise ValueError('Number of threads must be at least 1')
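As a quick illustration (my addition, assuming Python 3), this is how the model class behaves for a few inputs:

ClassWithThreads(4)    # fine
ClassWithThreads(0)    # ValueError: Number of threads must be at least 1

# With no conversion, a non-numeric argument surfaces as a different error,
# because the comparison itself fails in Python 3:
ClassWithThreads("4")  # TypeError: '<' not supported between instances of 'str' and 'int'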
Wouldn't this approach possibly fail very unpredictably for some conditions?
I think preemptive type checking generally goes against the grain of Python's (dynamic-, duck-typing) design philosophy.
Duck typing gives Python programmers great expressive power and rapid code development, but (some might say) it is dangerous because it makes no attempt to catch type errors.
Some argue that logical errors are far more serious and frequent than type errors, and that you need unit tests to catch those more serious errors anyway. So even if you do preemptive type checking, it does not add much protection.
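As a concrete sketch of that idea (my example, not part of the original answer, and assuming the ClassWithThreads class from above is in scope), a minimal unittest covers both bad values and bad types:

import unittest

class TestClassWithThreads(unittest.TestCase):
    def test_valid_count(self):
        self.assertEqual(ClassWithThreads(4).num_threads, 4)

    def test_zero_threads_rejected(self):
        with self.assertRaises(ValueError):
            ClassWithThreads(0)

    def test_non_numeric_rejected(self):
        # With no explicit conversion, a string argument fails at the
        # comparison (Python 3 behaviour).
        with self.assertRaises(TypeError):
            ClassWithThreads("4")

if __name__ == '__main__':
    unittest.main()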
This debate lies in the realm of opinions, not facts, so it is not a resolvable argument. Which side of the fence you sit on may depend on your experience and on your judgment of how likely type errors are. It may be biased by the languages you already know, and it may depend on your problem domain.
You just have to decide for yourself.
PS. In a statically typed language, the type checks can be done at compile-time, thus not impeding the speed of the program. In Python, the type checks have to occur at run-time. This will slow the program down a bit, and maybe a lot if the checking occurs in a loop. As the program grows, so will the number of type checks. And unfortunately, many of those checks may be redundant. So if you really believe you need type checking, you probably should be using a statically-typed language.
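If you want to measure that run-time cost yourself, a quick timeit comparison (my sketch; the functions here are made up for illustration) shows the overhead of per-call isinstance checks:

import timeit

def add_unchecked(x, y):
    return x + y

def add_checked(x, y):
    # Run-time type check on every call.
    if not isinstance(x, (int, float)) or not isinstance(y, (int, float)):
        raise TypeError("x and y must be numbers")
    return x + y

setup = "from __main__ import add_unchecked, add_checked"
print(timeit.timeit("add_unchecked(1, 2)", setup=setup))
print(timeit.timeit("add_checked(1, 2)", setup=setup))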
PPS. There are decorators for type checking, for both Python 2 and Python 3. This would separate the type checking code from the rest of the function, and allow you to more easily turn off type checking in the future if you so choose.
You could use a type checking decorator such as this ActiveState recipe, or this other one for Python 3. They allow you to write code something like this:
@require("x", int, float)
@require("y", float)
def foo(x, y):
return x+y
that will raise an exception if the arguments are not of the required type. You could easily extend the decorators to check that the arguments have valid values as well.
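The linked recipes differ in their details; as a rough, home-grown sketch (not the recipe code) of how such a decorator can check both types and values, something like this works:

import functools
import inspect

def require(arg_name, *allowed_types, **options):
    # Optional 'check' keyword: a predicate the value must satisfy.
    check = options.get('check')

    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            # Map positional and keyword arguments to parameter names.
            call_args = inspect.getcallargs(func, *args, **kwargs)
            value = call_args[arg_name]
            if allowed_types and not isinstance(value, allowed_types):
                raise TypeError("%s must be one of %s, got %s"
                                % (arg_name, allowed_types, type(value).__name__))
            if check is not None and not check(value):
                raise ValueError("invalid value for %s: %r" % (arg_name, value))
            return func(*args, **kwargs)
        return wrapper
    return decorator

@require("num_threads", int, check=lambda n: n >= 1)
def start_pool(num_threads):
    return "starting %d threads" % num_threads

Rejected calls then fail immediately: start_pool(0) raises ValueError and start_pool("4") raises TypeError, while start_pool(4) runs normally.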