In a previous entry I talked about my EasyLogging infrastructure.
I’m even happier about it now.
I ended up writing some tests and code over the weekend and found myself wanting to track values during my development iterations.
With my log_debug statements in the code, I simply needed to establish the logging context in the test case to see what my value flows looked like.
At the beginning of the test file, test class, or even interspersed in the test method itself, I make a call to start logging when I want output to begin. At that point any messages that are logged will be written while the test proceeds. When I’m no longer interested, I call finish_logging to stop the logging output.
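To make the pattern concrete, here’s a rough sketch. Only finish_logging is the name I actually use; start_logging and the little module standing in for the logging context are illustrative, not the real EasyLogging code.

    require 'test/unit'

    # Rough stand-in for the logging context -- not the real EasyLogging code.
    # The start_logging name is an assumption; finish_logging matches the call
    # mentioned below.
    module EasyLog
      def self.start_logging(io)
        @io = io
      end

      def self.finish_logging
        @io = nil
      end

      def self.log_debug(msg)
        @io.puts("DEBUG: #{msg}") if @io
      end
    end

    # A class under test that notes its value flow with log_debug calls.
    class Adder
      def add(a, b)
        EasyLog.log_debug("adding #{a} and #{b}")
        a + b
      end
    end

    class AdderTest < Test::Unit::TestCase
      def test_add
        EasyLog.start_logging(STDERR)   # begin writing log messages to STDERR
        assert_equal(5, Adder.new.add(2, 3))
        EasyLog.finish_logging          # stop the logging output
      end
    end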
I started to use it consistently in my test cases, putting finish_logging in the teardown method and then, at the top of the test case, directing the logging to STDERR if I wanted output or to nil if I didn’t.
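Continuing the same sketch, that setup/teardown arrangement looks roughly like this (again, only finish_logging is the real method name):

    class AdderSetupTest < Test::Unit::TestCase
      def setup
        # Flip this between STDERR and nil to turn the test logging on or off.
        EasyLog.start_logging(STDERR)
      end

      def teardown
        EasyLog.finish_logging
      end

      def test_add_again
        # log_debug output from Adder appears (or not) while this runs,
        # depending on what setup handed to start_logging.
        assert_equal(7, Adder.new.add(3, 4))
      end
    end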
Coupled with the logging in the classes I’m writing (which is thresholded and directed to files in normal operation), this makes quick work of determining where things have gone awry when tests fail in subtle ways.
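The thresholding I mean is roughly this idea (only an illustration of the shape, not the actual EasyLogging code): messages below a configured level are dropped, and the rest go to the normal log file.

    # Illustration only -- not the actual EasyLogging implementation.
    LEVELS = { :debug => 0, :info => 1, :warn => 2, :error => 3 }

    def log_at(level, msg, threshold, io)
      # Messages below the threshold are silently dropped; the rest go to
      # whatever IO the logger is configured with (a file in normal operation).
      io.puts("#{level.to_s.upcase}: #{msg}") if LEVELS[level] >= LEVELS[threshold]
    end

    File.open("example.log", "a") do |f|
      log_at(:debug, "dropped at the :info threshold", :info, f)
      log_at(:warn,  "written to the file",            :info, f)
    end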
Tests that fail in subtle ways? Isn’t that an indicator that more tests need to be written? Yes, of course! But in the real world, not all the tests that should get written actually do get written, despite our good intentions. The logging is especially useful in these situations to help identify untested conditions. If I turn on logging output and then look at the code and a few lines of logged values, I can figure out what’s going on and whip together a test a lot more quickly than just wondering why the heck my code isn’t doing what I thought it should.