Choosing not to guard against TypeError

We tend not to protect against TypeErrors in Python. To do so runs against the grain of dynamic typing in Python and limits the re-use potential of code we write.

For example, we could test whether the argument is an instance of str using the built-in isinstance() function and raise a TypeError exception if it is not:

import sys

def convert(s):
    """Convert a string to an integer."""
    if not isinstance(s, str):
        raise TypeError("Argument must be a string.")

    try:
        return int(s)
    except (ValueError, TypeError) as e:
        print("Conversion error: {}".format(str(e)), file=sys.stderr)
        raise

But then we'd probably also want to allow arguments that are instances of float. It soon gets complicated if we want to check whether our function will work with types such as rational, complex, or any other kind of number, and in any case, who is to say that it does?!
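To see how quickly this escalates, here's a speculative sketch (not from the original text) of convert() widened to accept a few numeric types. Notice that the guard now over-promises on the function's behalf:

import numbers

def convert(s):
    """Convert a string or number to an integer."""
    # The guard must now enumerate every type we believe we support...
    if not isinstance(s, (str, float, numbers.Rational, complex)):
        raise TypeError("Argument must be a string or a number.")
    # ...yet it still over-promises: int() raises TypeError for complex
    # arguments anyway, so the check claims support the function doesn't have.
    return int(s)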

Alternatively, we could intercept TypeError inside our sqrt() function and re-raise it, but to what end?
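As a rough illustration (assuming a simple Heron's-method sqrt(); the real definition lives elsewhere in the chapter), the intercept-and-re-raise pattern looks like this, and it buys us nothing the caller wouldn't get by letting the exception propagate:

def sqrt(x):
    """Compute square roots iteratively (a minimal sketch, non-negative x only)."""
    try:
        if x == 0:
            return 0.0
        guess = x
        for _ in range(20):
            guess = (guess + x / guess) / 2
        return guess
    except TypeError:
        # We could log or annotate the error here, but a bare re-raise
        # delivers exactly the same TypeError the caller would have seen
        # had we not caught it at all.
        raise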

Figure 6.3: Just let it fail

Usually in Python it's not worth adding type checking to your functions. If a function works with a particular type – even one you couldn't have known about when you designed the function – then that's all to the good. If not, execution will probably result in a TypeError anyway. Likewise, we rarely catch TypeError with except blocks.
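As a hedged example of both halves of that point (reusing the sqrt() sketch above), a decimal.Decimal argument works even though we never planned for it, while a string fails with a perfectly informative TypeError of its own accord:

from decimal import Decimal

print(sqrt(Decimal("2")))   # works: Decimal supports the arithmetic sqrt() needs
print(sqrt("two"))          # raises TypeError: unsupported operand type(s) for /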
