Be as Vulnerable as Possible to Internal Errors
I've been reading ZeroMQ: Messaging For Many Applications. Even if you don't care about ZeroMQ itself, you should read this book. It has more personality than almost any technical book I've read, full of challenging ideas presented in a humorous way.
I ran across a great quote regarding error handling.
> ØMQ’s error handling philosophy is a mix of fail fast and resilience. Processes, we believe, should be as vulnerable as possible to internal errors, and as robust as possible against external attacks and errors. To give an analogy, a living cell will self-destruct if it detects a single internal error, yet it will resist attack from the outside by all means possible.
"Processes … should be as vulnerable as possible to internal errors."
Defensive coding is an easy trap to fall into. Trying to make code safe from itself usually leads to a debugging nightmare, and often fails in new and spectacular ways. It seems counter-intuitive to be as "vulnerable as possible," but I've been testing this out on some of my recent projects with much success. The silent failures and mystery zombie states have all but disappeared. Now when I make a mistake, my code dies a terrible death, but it is very obvious where the problem lies and easy to fix.
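To make the split concrete, here's a minimal sketch in C (ZeroMQ's native language). The function names and the scenario are mine, not ZeroMQ's: the idea is simply to assert hard on violations of your own invariants, while validating and surviving whatever the outside world throws at you.

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

/* Internal invariant: by the time we get here, the queue must exist.
   If it doesn't, the bug is ours -- crash loudly and immediately. */
static void process_internal(const int *queue, size_t len)
{
    assert(queue != NULL);              /* vulnerable to internal errors */
    for (size_t i = 0; i < len; i++)
        printf("item: %d\n", queue[i]);
}

/* External input: a message from outside may be malformed.
   That's not our bug, so reject it gracefully and keep running. */
static int handle_external(const char *msg)
{
    if (msg == NULL || msg[0] == '\0') {
        fprintf(stderr, "dropping malformed message\n");
        return -1;                      /* robust against external errors */
    }
    printf("handling: %s\n", msg);
    return 0;
}

int main(void)
{
    int items[] = { 1, 2, 3 };
    process_internal(items, 3);

    handle_external("hello");
    handle_external(NULL);              /* survives bad external input */
    return 0;
}
```

If `process_internal` ever receives a NULL queue, the process aborts on the spot with a file and line number, instead of limping along in a mystery zombie state.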
Join the discussion on Facebook in [Programming Philosophy]()