I have a function that, in the simplest of cases, operates on an iterable of items.
def foo(items):
    for item in items:
        # do stuff
Sometimes, I do not want to pass it an iterable of items directly, but rather an object that provides a method to get the iterable:
def foo(obj):
    for item in obj.iteritems():
        # do same stuff as above
I can merge these two cases like this:
def foo(obj):
    try:
        items = obj.iteritems()
    except AttributeError:
        items = obj
    for item in items:
        # do stuff
This works just fine. Now I get a third use case that looks like this:
import itertools

def foo(objs):
    for item in itertools.chain.from_iterable(obj.iteritems() for obj in objs):
        # do same stuff again
I can still use the try-except approach to distinguish these cases, since the interfaces are incompatible. However, the nested try-except blocks would start to become very ugly, and even more so when I want to add a fourth use case. Is there any way to solve this without nesting the try-except blocks?
A reasonable number of parameters is up to 5; reaching 6 parameters is already below average. So the common-sense rule that sets the bar around 3 or 4, and says "for sure, nothing beyond 6", can be seen in actual code. Methods with 10 arguments or more appear in less than 20% of projects, which is still quite a lot.
The main function can be defined with no parameters or with two parameters (for passing command-line arguments to a program when it begins executing). The two parameters are referred to here as argc and argv, though any names can be used because they are local to the function in which they are declared.
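That description is for C; since this question is about Python, here is a minimal sketch of the Python analogue. Python has no argc/argv parameters on an entry function, so the arguments arrive through sys.argv, and the name main is only a convention, not required by the language:

import sys

def main(argv):
    # argv[0] is the program name; the remaining entries are the arguments
    print(len(argv), argv)

if __name__ == "__main__":
    main(sys.argv)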
The ideal number of arguments for a function is zero (niladic). Next comes one (monadic), followed closely by two (dyadic). Three arguments (triadic) should be avoided where possible. More than three (polyadic) requires very special justification - and then shouldn't be used anyway.
As it stands, you should probably use at least two functions here, with the third use case just calling the first with the itertools.chain.from_iterable result. You could also potentially have a use for *args; it depends on your precise case (providing a real example is helpful). You can also use a simple helper function to return the right kind of iterator.
Perhaps this might work out:
def _foo_iter(obj):
    try:
        return obj.iteritems()
    except AttributeError:
        return obj

def foo(*objs):
    for obj in objs:
        for item in _foo_iter(obj):
            # do stuff
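A usage sketch (assuming Python 2, where dictionaries expose iteritems()); the same foo now covers all three cases:

foo([1, 2, 3])                   # a plain iterable
foo({'a': 1, 'b': 2})            # an object providing iteritems()
foo([1, 2], {'a': 1}, (3, 4))    # several objects at once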
Must agree with Chris: the magic understand-everything input is going to turn around and bite you. If you pass it an iterable of objects-with-iterables of iterables, how do you specify at what level to actually start processing data?
Far better to stick with "either a list or a generator" as input, then pre-process your calls to the function.
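In other words (a sketch, assuming the original single-iterable signature of foo and that the caller knows which shape it holds), adapt the argument at the call site instead of inside foo:

import itertools

foo(items)                                                        # already an iterable
foo(obj.iteritems())                                              # object with iteritems()
foo(itertools.chain.from_iterable(o.iteritems() for o in objs))   # many such objects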