Tag Archives: TDD

Global Day of Coderetreat 2014 – Dallas/Fort Worth

It is that time of year again: the Global Day of Coderetreat date has been announced.

This year, the Global Day of Coderetreat is November 15, 2014.

And with that, I am pleased to announce the Dallas/Fort Worth area’s participation in Global Day of Coderetreat.

For those who are unfamiliar with Coderetreat, it is a day where developers of all levels and languages get together to engage in deliberate practice. The day is spent in multiple 45-minute sessions working on the problem of Conway’s Game of Life. During the sessions the goal is to pair program and use TDD to drive to a solution, while focusing on The Four Rules of Simple Design. At the end of each session, everybody deletes their work and then has the opportunity to share an insight they had during that session. As the day goes on, constraints get added to the sessions, with the goal of making you think about solving the problem in a different way.

I encourage anybody in the Dallas/Fort Worth area to sign up while space is available, and if you are located somewhere else, check the events listing on the Coderetreat site and see if there is an event in your area. If not, I highly encourage you to get one set up for your area.

–Proctor

Global Day of Code Retreat thought dump

Last Saturday, December the 3rd, played host to the Global Day of Code Retreat. There were 90 cities participating, with over 2000 developers taking time to practice and hone their skills. I was lucky enough to be a co-organizer for the one in Dallas, and wanted to share my thoughts about what I saw that day. For those unfamiliar with what a Code Retreat is, I urge you to go check out the Code Retreat site for more information.

There were a number of things that impressed and surprised me about last weekend. First, I was impressed by the turnout for the Dallas Code Retreat: nearly 40 people showed up, and that was with only about two weeks’ notice for the event. To have that many people set aside a day to come out and practice coding on such short notice was highly impressive. I was also impressed by the diversity of the people who participated: developers of all levels, from those who had never pair programmed or heard of TDD to those who pair program or TDD on a daily, or near daily, basis.

As one of the co-coordinators, and having participated in two Code Retreats previously, both after the SCNA conference the last two years, I only paired in one session. I was more interested in letting everyone else have the opportunity to experience a Code Retreat, in the hopes of making this something that happens regularly in the Dallas/Fort Worth Metroplex.

As I wasn’t the facilitator (that was Glenn Vanderburg, to whom I offer a hearty thank you yet again), being the odd man out meant I was free to walk around and see what the other pairs were doing. It was interesting to compare their approaches with some of the approaches my pairs and I had taken in the two Code Retreats I participated in. It was also interesting to see how expressive the code being written was, depending on the point in the session when I swung by.

On my one pairing session, which was in Ruby, I noticed something interesting. I am not sure if it was the tests and the order they were written in, or some of the elegance and syntactic sugar of Ruby itself, that led to the route we took. After thinking about it, I am not sure that I like the end result, which I will explain after I show what it was.

We were testing an Alive state object and the transition between Alive and Dead. The tests started with zero alive neighbors returning a Dead state object.

# First passing implementation: every cell transitions to Dead.
def transition(alive_neighbors_count)
  Dead.new
end
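For context, here is a minimal sketch of the kind of test that might have driven that first implementation. The Alive and Dead class names come from our session, but the test itself, and the choice of minitest, is a reconstruction on my part, assuming transition is an instance method on Alive.

require "minitest/autorun"

# Hypothetical reconstruction, not our actual session code.
class AliveTransitionTest < Minitest::Test
  def test_zero_alive_neighbors_transitions_to_dead
    assert_instance_of Dead, Alive.new.transition(0)
  end
end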

The next test we wrote was for one alive neighbor, which passed as expected.
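Reconstructed the same hedged way, that test might have looked like the following; it passes with no production change, since transition still returns Dead.new unconditionally.

require "minitest/autorun"

# Hypothetical reconstruction: one alive neighbor also yields Dead,
# which the unconditional Dead.new already satisfies.
class AliveWithOneNeighborTest < Minitest::Test
  def test_one_alive_neighbor_transitions_to_dead
    assert_instance_of Dead, Alive.new.transition(1)
  end
end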

We then tested for two alive neighbors, which should transition to an Alive state.

# Two alive neighbors keeps the cell alive; anything else still dies.
def transition(alive_neighbors_count)
  return self if alive_neighbors_count == 2
  Dead.new
end

The next test was for three alive neighbors, which should also transition to an Alive state. The code then looked like the following.

# Two or three alive neighbors keeps the cell alive.
def transition(alive_neighbors_count)
  return self if (2..3).include?(alive_neighbors_count)
  Dead.new
end
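The tests that forced those last two changes, again reconstructed rather than copied from our session, might have looked like this:

require "minitest/autorun"

# Hypothetical reconstruction: two or three alive neighbors keep the
# cell Alive, which is what drove the (2..3) range check above.
class AliveStaysAliveTest < Minitest::Test
  def test_two_alive_neighbors_stays_alive
    cell = Alive.new
    assert_same cell, cell.transition(2)
  end

  def test_three_alive_neighbors_stays_alive
    cell = Alive.new
    assert_same cell, cell.transition(3)
  end
end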

While the range object and the include? method are expressive compared to an if statement checking that the number is between 2 and 3, after thinking about it for a while, I believe it is the wrong type of expressive. We opted for expressiveness in terms of being in a range instead of expressiveness in terms of the domain and the actual ruleset. Conway’s Game of Life talks about dying as if due to underpopulation and dying as if due to overpopulation, but nowhere did we wind up expressing this in the code, unlike the following, which does express that domain knowledge.

# The same behavior, expressed in domain terms.
def transition(alive_neighbors_count)
  return Dead.new if underpopulated?(alive_neighbors_count)
  return Dead.new if overpopulated?(alive_neighbors_count)
  self
end
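The underpopulated? and overpopulated? predicate names are from that version, but their bodies were never shown; here is a minimal sketch of how they might be defined, assuming the standard Game of Life thresholds of fewer than two live neighbors for underpopulation and more than three for overpopulation.

# Hypothetical helper predicates; the bodies are assumptions based on
# the standard Game of Life rules, not code from the session.
def underpopulated?(alive_neighbors_count)
  alive_neighbors_count < 2
end

def overpopulated?(alive_neighbors_count)
  alive_neighbors_count > 3
end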

This is intended as food for thought: is the expressiveness of the language leading you away from finding a way to express the domain in the language, losing the expressiveness of the domain in the process?

Agile Testing with Lisa Crispin – Part 2

Here is Part 2 of my notes on Lisa Crispin’s talk on Agile Testing. If you haven’t already, go catch up on Part One.

Lisa noted that her team stopped committing at the sprint level. They just work hard, don’t waste time, and focus on delivering the software. This works because the team is transparent with the customer, who can see that the work is getting done.

Teams need time to learn and experiment, and they need slack. Give the team time to innovate, to catch up on the latest technology, and to actually move to it.

Automated tests need as much care and feeding as the code.

She noted that learning the business helped cut down the time spent dealing with production support. They found scenarios where they could automate support tasks, or where they were even solving the wrong business problems. Lisa gave an example where a user kept requesting a report, and it was being delivered as the team understood it, but it took sitting down with the end user to understand the report the user was actually asking for.

Lisa recommended looking at: Daniel Pink and intrinsic motivators, The Agile Samurai by Jonathan Rasmusson, Jim Highsmith and Israel Gat’s research into measuring technical debt, and her article Selling Agile to the CFO.

The quote of the evening seemed to be: “If it doesn’t have to work, you don’t have to test it.”

She emphasized that QA shouldn’t be treated as separate from development; QA time is part of development time.

Lisa pointed out that the most value was not in the actual integration tests, but in the communication between the developers and testers that resulted from the interaction.

If you have too many things going on at the same time, you task switch too much, and the result is that you have a hard time predicting when you will be done.

She encouraged us to get away from labels and just try to deliver the best value and the best quality software that we can.

She encouraged cross-pollination across different teams in the area; you never know where new ideas will come from. She talked about how she brought back the idea of an impediment backlog from a visit to the UK, and when she took the idea to her team, she noticed that just making the impediments visible helped the team address those issues. This reminded me of the Craftsman Swap that both Obtiva and 8th Light encourage, as well as Corey Haines’ journeyman tour.

Agile Testing with Lisa Crispin – Part 1

This past Wednesday, Apr-20-2011, I attended the DFW Scrum meetup, with guest Lisa Crispin, @lisacrispin, presenting over Skype, and I managed to take a wonderful seven pages of notes on her presentation in my composition book. Because of this, I will be breaking my notes up into a number of posts to make them more digestible. I hope I didn’t butcher her talk too much, as I was busy trying to keep up with all of the gems she was throwing out to us. Apologies to Lisa if I did.

The big thing she started with was that before a team tries to go off and make any decisions, or do anything, they need to answer the question: “What does a commitment to quality mean?” Only once that is answered can they proceed to improve the quality of their product.

On Reducing Show-Stoppers

These are the steps Lisa’s team took to reduce the number of show-stoppers in their product.

  • First, they set up the basics: Continuous Integration and a dedicated test environment.
  • Once they had those in place, they set up a police light for show-stoppers, and anytime someone reported a show-stopper, that person had to turn on the light. This had a twofold effect: it made the business person look silly if it was really a trivial bug, and it got annoying for the team if the light was constantly on.
  • Development started TDDing their code. She made a quick side note that TDD is hard to learn, and really, any test automation is hard to learn. She pointed out that it took the developers 8 months to get over the hump of TDD.
  • In the meantime, they wrote manual test scripts over the critical parts of the application. It was painful, and a great motivation for automating tests.
  • They got UI-based automated tests running.
  • They worked to get functional automated tests instead of UI tests. Lisa mentioned that her team used FitNesse.
  • They started with a happy path case; after they had that going, they would then add tests around the boundary and error condition cases.
  • She noted that it took lots of baby steps over 8 years with a commitment to testing.

Testing is Not a Phase

The goal is a short feedback loop, as it is easier to recall the code an hour later than a month or two later. She noted that testers may be against this at first, since it means testing the same thing multiple times, but it is important to shortening the feedback loop and improving the quality. I would also personally venture that it helps emphasize the importance of getting tests automated against a baseline set of expected functionality.

Lisa advised against calling a story done until all of the exploratory testing has been done.

She then pointed out some things to watch out for when planning. Watch out for overcommitting, since commitments usually don’t take into account the testing activities and anything they uncover. Also watch out for testing estimates that are not in line with the development effort and estimate; she gave the example that if the testing effort is twice the development effort, development might be missing something.

Continued…

I will be posting part two soon, as this was only two-and-a-half pages of the seven pages of notes.

TDD: Clicker Training for Developers?

I started thinking about some of the bigger names in the developer community and how polarizing they can be due to their hard-line positions on topics. One of the topics that came to mind was Test Driven Development, and how its advocates almost always have a strict stance on the correct ways to approach it. Outside-in or inside-out. Only one assertion per test, or assert one logical concept. Mocks vs. stubs. State vs. behavior. TDD or BDD, or is there really even a difference?

My wife pet-sits, and while she cooks she likes to watch shows that cover training pets, similar to how one might listen to podcasts on the drive to work. I will occasionally overhear, or catch parts of, these shows myself, usually while helping her in the kitchen. I also recently read Switch: How to Change When Change is Hard, by Chip and Dan Heath, which has a section discussing the importance of reinforcing positive behavior when trying to encourage change and establish habits. Thinking about the hard-liners and TDD, I realized Kent Beck created the perfect “training clicker” for developers. Whether this was intentional at the time is something I will leave for him to answer.

To train by positive reinforcement, one has to capture the desired behavior and immediately reward it. When developers test-drive their code, they are encouraged to run the tests after each change to see if the change works. As unit tests are supposed to be fast, this gives them immediate feedback on whether what they did worked.

The majority of test runners use one of two words depending on the result of the test run: success or failure. These two words are very emotionally charged. Combine this with the fact that they are usually printed in all caps and followed by a number of exclamation points, and you get output like:
SUCCESS!!!
or
FAILURE!!!

Can’t you just see the emotions getting charged?

Add to this a graphical user interface for the test runner, or add-ons that change the console text color depending on the end state of the test run, using green for successes and red for failures, and you get even more emotional resonance. You have now gone from the above results to something along the lines of:
SUCCESS!!! (in green)
or
FAILURE!!! (in red)

How is that for evoking an emotional response? I believe I have even heard Kent Beck talk about the thrill of seeing the status bar turn green.

Proponents of TDD also encourage small units of work. The reason is that when you work in minimal units of change, you know almost exactly what caused a test to fail. There ends up being a hidden effect to this, though. When you work in small units between test runs, the behavior is reinforced even more frequently, ingraining it more deeply. Do this enough and the behavior will eventually turn into a habit. And our self-serving egos love to rationalize why our habits are the right thing to be doing, lest we allow ourselves to realize we might be acting wrongly.

And I do not think this applies only to the strong proponents of TDD. Do we ever consider that a strong opponent may have been negatively reinforced by TDD? Might they have tried it on their own, with no guidance, and gotten frequent feedback of failures? Maybe they tried at the wrong level of abstraction, or on a codebase that was not designed with testability in mind. Maybe the test runner just kept giving them negative reinforcement until they decided that TDD is a waste of time.

I am putting this idea out there not to cast judgment against TDD, as it is a practice that I believe has a large amount of value and one I would love to get good at, but as something to think about. Maybe this will help each side see why the other side might feel the way they do about TDD.

I would love to know your thoughts on this.