The very latest thinking from us on BDD, Continuous Delivery, and more.

Yes, and… by Liz

Published Sun 5 Mar 2017 14:38

Imagine two actors standing on stage. “I like your penguin,” the first says. The other turns round, looking at the empty space where one might imagine a penguin could be following.

There are two things that can happen.

Perhaps the second actor frowns at the empty space. “I don’t have a penguin,” they say. “Oh,” says the first, and that’s it. The scene is dead.

Perhaps the second actor goes along with it. “Why, thank you!” they exclaim. “I only got it yesterday. Rockhoppers, I tell you, you got to get one of these…” And the scene continues, and now it’s funny, because it has a penguin in it, and penguins are funny – especially imaginary ones. The humour emerges, with jokes that are unexpected, both to the audience and the participants!

This is the principle of “Yes, and…” that forms the basic rule-of-thumb of Improv.

It’s also one of the most important principles for Agile transformation… or any kind of change involving people. This post is about why, and how to spot the principle at work and support it.

Complexity has Disposition, not Predictability

In a complex space, where cause and effect are only correlated in retrospect, we’re bound to make unexpected discoveries. Anything we do in that space needs to be safe-to-fail. The things we try out that might improve matters, and which are safe-to-fail, are called probes.

As leaders, our temptation is to come in and start poking the space, persuading people to pick up new ideas and try them out. Maybe that’s safe-to-fail; maybe it isn’t.

The people best positioned to know aren’t the coaches and consultants brought in for the transformation. They’re not the leaders and the managers. They’re the people on the ground, who’ve been working in that context for a while. They know what’s safe and what’s not. They know what’s supported and what isn’t. If you listen to the stories that people tell then you’ll hear about what works and what doesn’t… and those stories play into the space, too, and the people on the ground have been listening to them for a while.

They might not know what’s guaranteed to succeed. I once asked a group whether a football game counted as complicated (predictable) or complex (emergent). One of the attendees asked, “Is England playing?”

I think I might have made a face at that! However, it gave me a useful place to explain the difference between predictability and disposition. Just because the England team seem unlikely to win the football match doesn’t mean that they can’t. They don’t lose every match, after all! So perhaps the world of football is disposed against England winning the World Cup.

However, we’re disposed for good beer and a beautiful summer with at least a bit of sunshine, so not everything is bad.

In our organizations, different probes will be differently disposed to succeed or fail, and to require amplification or dampening accordingly. The thing about human organizations, though, is that they are Complex Adaptive Systems, or CAS, in which the agents of the system (in this case, humans) can change the system itself.

And they already are.

People are generally interested in doing things better. They’re always trying things out, and some of those things will be succeeding.

The most important thing we can do as coaches and consultants is to encourage those successful probes; to spot those things which are succeeding and amplify them, and to make safe spaces in which more probes can be carried out, so that we can see more success and amplify it again.

Amplifying means taking what’s already in place (yes!) and building on it (and…).

A Really Common Pattern: “That won’t work because…”

Humans have a real tendency to see patterns which aren’t there, and to spot failure more than success. We don’t like failure. It makes us feel bad. We tend to avoid it, and when we see failure scenarios we invoke one big pattern in our head: That won’t work because…

There’s one word that’s short-hand for that tendency to see and avoid failure.

“I want to learn TDD, but it will take too long.”

“We could co-locate the team, but we’d need to find desk space.”

“We’ve got some testers in India, but they’re not as good as we need them to be.”

The word “but” negates everything that comes before it. It’s the opposite of building on what’s already there. What might happen if, instead, we use the word “and”?

“I want to learn TDD, and I’ll need to set aside some time for that.”

“We could co-locate the team, and we could look for desk space to do that.”

“We’ve got some testers in India, and we need to help them improve.”

Now, instead of shutting down ideas, be they our own or other people’s, we’re looking at possibilities for change. Just changing the language that we use in our day-to-day work can help people to spot things they could try out.

We already use “Yes, And…” in TDD

The TDD cycle is really simple:

  • Write a failing test
  • Make it pass
  • Refactor

However, when we’re dealing with legacy code, which has behaviour that we’re interested in and want to keep, the first thing we do is to amplify and anchor what’s already in place! We write a passing test over the existing code, then refactor with that safety in place, then we can add the new failing test and the change in behaviour.

  • Write a passing test (Yes!)
  • Refactor
  • Write a failing test (And…)
  • Make it pass.
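
The legacy-code cycle can be sketched with plain Ruby assertions. The Invoice class and its behaviour below are hypothetical, purely to illustrate the ordering of the steps:

```ruby
# Hypothetical legacy code whose behaviour we want to keep.
class Invoice
  def total(items)
    t = 0
    items.each { |i| t += i }
    t
  end
end

# 1. Write a *passing* test over the existing code (Yes!)
raise 'anchor broken' unless Invoice.new.total([10, 20]) == 30

# 2. Refactor with that safety net in place.
class Invoice
  def total(items)
    items.sum
  end
end
raise 'refactor broke behaviour' unless Invoice.new.total([10, 20]) == 30

# 3. Write a failing test for the new behaviour (And...), e.g. ignoring
#    nil items -- this would fail until step 4 implements it:
# raise unless Invoice.new.total([10, nil, 20]) == 30

# 4. Make it pass.
class Invoice
  def total(items)
    items.compact.sum
  end
end
raise 'new behaviour missing' unless Invoice.new.total([10, nil, 20]) == 30
```

The anchoring test in step 1 is what makes steps 2 to 4 safe.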

We can do this with our co-workers, our teams and our organizations as well. Of course, we don’t refactor people! Refactoring code, though, usually means separating concerns and focusing on responsibilities. In human terms, that means anchoring the behaviour or probe that’s working, and helping people or organizations focus on their strengths and differentiators. This helps to outline the disposition of the space too, so that whatever we try next has a good chance of working.

Of course, in a complex space, a “passing test” might not always be possible, depending on what we discover. At least, though, by anchoring what we value, we’ve made sure that it won’t be any worse. If we keep the investment in our probes small, it’s safe-to-fail.

  • Anchor valuable behaviour (Yes!)
  • Recognize and celebrate strengths
  • Design the next probe, or just encourage people with more context to do that (And…)
  • Give it a try.

Tim Ferriss’s book, “The 4-Hour Workweek”, is all about this: focusing on our strengths and letting other people cover our weaknesses, so that we can change our lives to allow ourselves to learn new things.

The Feedback Sandwich is a “Yes, and…” pattern

You’ve probably come across the feedback sandwich before:

  • Say something good
  • Say something bad
  • Say something good

It’s got some other, less polite names, and a poor reputation, so I’d like to show you a different way to think about it. Here’s what we’re really doing:

  • Anchoring valuable behaviour
  • Suggesting improvements
  • Outlining a good place we’d like to get to.

Human beings are complex, and we make complex systems. It’s entirely possible that the good place we might like to get to won’t be easy, or we’ll make discoveries that mean we need to change direction. So that good place we want to get to is actually just an example of coherence; a realistic reason for thinking a probe might have a positive impact. We can think of a good thing that might result from trying out that improvement.

The thing is, people are adaptive. If you can outline a place you’re trying to get to, and it’s good, and other people want to get there, and you’ve already made them feel safe by anchoring the valuable behaviour, then they’ll make their own improvements, and probably try things you didn’t even think of. You don’t need the bit in the middle. People know themselves better than you can, and they know their own disposition. They know what feels safe, and possible. By giving them a suggestion of a good place to get to, we’re freeing them up to find their own way to get there… and they might end up going to a different place that’s better than before, and that’s good too.

In fact, just making sure that people’s most valuable behaviour is appreciated can be a really great way to create change. “Thank you” is a very simple way of doing that, so remember to use those two words a lot! “Thank you” is the “Yes!” of personal feedback.

It’s All About Safety.

This is the book which prompted this whole post.

In “Improving Agile Teams”, Paul Goddard dedicates a number of chapters to the concepts of safety, failure and fear of failure. I highly recommend this book, especially the simple exercises which are designed to help a team adopt principles that will help to make things safe.

It’s possible to look at an entire Agile transformation through this lens, too.

Typically, an organization starts adopting Agile at the team level, just in software development, often by taking on Scrum or some form of Kanban. It’s pretty easy to do, because we’ve got a lot of tools which enable us to change our software safely. BDD and TDD provide living documentation and help to anchor the behaviour that we care about. Our build pipelines and continuous integration make sure that whatever we’ve got working stays working. Of course, we can’t quite ship yet! That’s because shipping to production is still a bit of a commitment, and it isn’t safe.

But* now it’s safe for Development in an organization to get it wrong.

Gradually we adopt practices alongside IT Operations which enable us either to fail in a place where it’s safe – making our test systems more like production – or which enable us to roll back in case of failure. Of course, once you’ve got a production-like test system that can be destroyed and rebuilt as required, it’s not a big step to actually make production systems that can be destroyed and rebuilt easily. Jez Humble and Dave Farley’s book, “Continuous Delivery”, is all about this (they call them “Phoenix Servers”, which gives me an excuse to shout-out to “The Phoenix Project” by Gene Kim et al, too; an excellent novel on the same theme).

So now we have DevOps, and it’s safe for the whole of IT to get it wrong.

As we start shipping smaller pieces of functionality more and more smoothly, the Business starts to pay attention. “Well… could I try this out?” someone says, and suddenly we’ve made it OK for them to do probes too, finding out what the market and their customers do and don’t want.

Now we’ve made it safe for the business to get it wrong.

At this stage we can really start churning out probes as a whole organization. There’s a really weird thing that happens in complexity, though. Sometimes people use things in a way that they weren’t intended to be used.

I often tell the story of Ludicorp and their online game, “Game Neverending”. They built tools for the players to share screenshots. Those tools weren’t just used for screenshots, though, and they became the foundation of Flickr. Google started as a search engine. Amazon sold books. These companies don’t do what they started out doing. This is “exaptation”: people taking what we created (yes!) and using it for something else (and…).

It’s what makes it OK for us to have not discovered everything yet. It’s OK for humanity to be wrong.

This is how I always think of the Agile Fluency Model™; with these levels of making it OK for people to be wrong. An Agile transformation is all about making it OK for an organization to get things wrong. If we think of it this way, it becomes obvious that safety is going to be a truly important part of that.

If you can’t think of a probe to try that might improve things and which would be safe-to-fail, maybe you can think of a probe to try that increases levels of safety. Something which puts in place the “Yes!” so that the “And…” can follow later. Something which helps people reduce their investments and commitments, providing themselves with options, or helping them deliberately discover information while it’s still cheap to change.

Make it Safe for Yourself, Too

*You can usefully use the word “But” to negate negatives. That’s about all it’s good for. It took me a fair few months to adopt the “Yes, and…” habit, and even I still use “but” occasionally, because I’m human. I slip up. I make mistakes.

So will you.

Celebrate your successes and your strengths, and make it safe for yourself to fail, too. Forgiveness is pretty much the greatest gift you can give yourself… or anyone else.

It lets you focus on the “Yes!” of your life so that you, too, can have “and…” in it.

This article was originally published here.

Breaking Boxes by Liz

Published Mon 7 Nov 2016 11:38

I love words. I really, really love words. I like poetry, and reading, and writing, and conversations, and songs with words in, and puns and wordplay and anagrams. I like learning words in different languages, and finding out where words came from, and watching them change over time.

I love the effect that words have on our minds and our models of our world. I love that words have connotations, and that changing the language we use can actually change our models and help us behave in different ways.

Language is a strange thing. It turns out that if you don’t learn language before the age of 5, you never really learn language; the constructs for it are set up in our brains at a very early age.

George Lakoff and Mark Johnson propose in their book, “Metaphors we Live By”, that all human language is based on metaphorical constructs. I don’t pretend to understand the book fully, and I believe there’s some contention about whether its premise truly holds, but I still found it a fascinating book, because it’s about words.

There was one bit which really caught my attention. “Events and actions are conceptualized metaphorically as objects, activities as substances, states as containers… activities are viewed as containers for the actions and other activities that make them up.” They give some examples:

I put a lot of energy into washing the windows.

Outside of washing the windows, what else did you do?

This fascinated me. I started seeing substances, and containers, everywhere!

I couldn’t do much testing before the end of the sprint.

As if “testing” was a substance, like cheese… we wanted 200g of testing, but we could only get 100g. And a sprint is a timebox – we even call it a box! I think in software, and with Agile methods, we do this even more.

The ticket was open for three weeks, but I’ve closed it now.

How many stories are in that feature?

It’s outside the scope of this release.

Partly I think this is because we like to decompose problems into smaller problems, because that helps us solve them more easily, and partly because we like to bound our work so that we know when we’re “done”, because it’s satisfying to be able to take responsibility for something concrete (spot the substance metaphor) and know you did a good job. There are probably other reasons, too.

There’s only one problem with dividing things into boxes like this: complexity.

In complex situations, problems can’t be decomposed into small pieces. We can try, for sure, and goodness knows enough projects have been planned that way… but when we actually go to do the work, we always make discoveries, and the end result is always different to what we predicted, whether in functionality or cost and time or critical reception or value and impact… we simply can’t predict everything. The outcomes emerge as the work is done.

I was thinking about this problem of decomposition and the fact that software, being inherently complex, is slightly messy… of Kanban, and our desire to find flow… of Cynthia Kurtz’s Cynefin pyramids… and of my friend and fellow coach, Katherine Kirk, who is helping me to see the world in terms of relationships.

It seemed to me that if a complex domain wasn’t made up of the sum of its parts, it might be dominated by the relationship between those parts instead. In Cynthia Kurtz’s pyramids, the complex domain is pictured as if the people on the ground get the work done (self-organizing teams, for instance) but have a decoupled hierarchical leader.

I talked to Dave Snowden about this, and he pointed me at one of his newer blog posts on containing constraints and coupling constraints, which makes more sense as the hierarchical leader (if there is one!) isn’t the only constraint on a team’s behaviour. So really, the relationships between people are actually constraints, and possibly attractors… now we’re getting to the limit of my Cynefin knowledge, which is always a fun place to be!

Regardless, thinking about work in terms of boxes tends to make us behave as if it’s boxes, which tends to lead us to treat something complex as if it’s complicated, which is disorder, which usually leads to an uncontrolled dive into chaos if it persists, and that’s not usually a good thing.

So I thought… what if we broke the boxes? What would happen if we changed the metaphor we used to talk about work? What if we focused on people and relationships, instead of on the work itself? What would that look like?

Let’s take that “testing” phrase as an example:

I couldn’t do much testing before the end of the sprint.

In the post I made for the Lean Systems Society, “Value Streams are Made of People”, I talked about how to map a value stream from the users to the dev team, and from the dev team back to the users. I visualize the development team as living in a container. So we can do the same thing with testing. Who’s inside the “testing” box?

Let’s say it’s a tester.

Who’s outside? Who gets value or benefits from the testing? If the tester finds nothing, there was no value to it (which we might not know until afterwards)… so it’s the developer who gets value from the feedback.

So now we have:

I couldn’t give the devs feedback on their work before the end of the sprint.

And of course, that sprint is also a box. Who’s on the inside? Well, it’s the dev team. And who’s on the outside? Why can’t the dev team just ship it to the users? They want to get feedback from the stakeholders first.

So now we have:

I couldn’t give the devs feedback on their work before the stakeholders saw it.

I went through some of the problems on PM Stack Exchange. Box language, everywhere. I started making translations.

Should multiple Scrum teams working on the same project have the same start/end dates for their Sprints?


Does it help teams to co-ordinate if they get feedback from their stakeholders, then plan what to do next, at the same time as each other?

Interesting. Rephrasing it forced me to think about the benefits of having the same start/end dates. Huh. Of course, I’m having to make some assumptions in both these translations as to what the real problem was, and with whom; there are other possibilities. Wouldn’t it have been great if we could have got the original people experiencing these problems to rephrase them?

If we used this language more frequently, would we end up focusing a little less on the work in our conceptual “box”, and more on what the next people in the stream needed from us so that they could deliver value too?

I ran a workshop on this with a pretty advanced group of Kanban coaches. I suggested it probably played into their explicit process policies. “Wow,” one of them said. “We always talk about our policies in terms of people, but as soon as we write them down… we go back to box language.”

Of course we do. It’s a convenient way to refer to our work (my translations were inevitably longer). We’re often held accountable and responsible for our box. If we get stressed at all we tend to worry more about our individual work than about other people (acting as individuals being the thing we do in chaos) and there’s often a bit of chaos, so that can make us revert to box language even more.

But I do wonder how much less chaos there would be if we commonly used language metaphors of people and relationships over substance and containers.

If, for instance, we made sure the tester had what they needed from us devs, instead of focusing on just our box of work until it’s “done”… would we work together better as a team?

If we realised that the cost might be in the people, but the value’s in the relationships… would we send less work offshore, or at least make sure that we have better relationships with our offshore team members?

If we focused on our relationship with users and stakeholders… would we make sure they have good ways of giving feedback as part of our work? Would we make it easier for them to say “thank you” as a result?

And when there’s a problem, would a focus on improving relationships help us to find new things to try to improve how our work gets “done”, too?

This article was originally published here.

How do you terminate a project in your org? by Liz

Published Mon 18 Jul 2016 08:46

We all know that when we do something new, for the first time, we make discoveries; and all software projects (and in fact change efforts of any variety) target something new.

(You can find out what that is by asking, “What will we be able to do when this is done that we can’t do right now? What will our customers, our staff or our systems be able to do?” This is the differentiating capability. There may be more than one, especially if the organization is used to delivering large buckets of work.)

Often, though, the discoveries that are made will slow things down, or make them impossible. Too many contexts to consider. Third parties that don’t want to co-operate, let alone collaborate. A scarcity of skills. Whatever we discover, sometimes it becomes apparent that the effort is never going to pay for itself.

Of course, if you’ve invested a lot of money or time into the effort, it’s tempting to keep throwing more after it: the sunk-cost fallacy. So here are three questions that orgs which are resilient and resistant to that temptation are able to answer:

  1. How can you tell if it’s failing?
  2. What’s the process for terminating or redirecting failing efforts?
  3. What happens to people who do that?

If you can’t answer those questions proudly for your org, or your project, you’re probably over-investing… which, on a regular basis, means throwing good money after bad, and wasting the time and effort of good people.

Wouldn’t you like to spend that on something else instead?



This article was originally published here.

Make Everything The Same by Sandi Metz

Published Thu 9 Jun 2016 12:23

This post originally appeared in my Chainline Newsletter. Due to popular request, I'm re-publishing it here on my blog. It has been lightly edited.

As part of my local Ruby meetup (#westendruby), I've been dabbling in katas and quizzes. Having worked through several, I can't help but notice that my solutions are sometimes radically different from the others.


Having one's solutions differ from the crowd's is (and ought to be) a cause for heightened scrutiny. Therefore, I've been pondering code, trying to understand what's driving my choices. I have a glimmer of an idea, and thus, this newsletter.

The Setup

The easiest way for me to explain is for you to first go do the Roman numerals kata. Happily, there's a Roman numerals test on exercism to get you started. The task is to convert Arabic numbers into Roman numerals, and the tests are all some form of:

assert_equal 'I', 1.to_roman
assert_equal 'CMXI', 911.to_roman

In case your life is such that you can't drop everything and do that exercise right now, here's a reminder of how the Roman counting system works. There are two main ideas. First, a few specific numbers are represented by letters of the Roman alphabet. Next, these letters can be combined to represent other numbers.

Seven Roman letters are used. The letters and their associated values are:

  • I = 1
  • V = 5
  • X = 10
  • L = 50
  • C = 100
  • D = 500
  • M = 1,000

The Arabic numbers 1-10 are written I, II, III, IV, V, VI, VII, VIII, IX, and X in Roman numerals. As you may already know, there are two rules at work.

1, 2 and 3 illustrate the first rule. 1 is one I. 2 is two Is. 3 is three Is. The rule is: for an Arabic value x, select the largest Roman letter whose value does not exceed x, and repeat it as many times as needed. Let's call this the 'additive' rule.

4 follows the second rule. Instead of IIII (four I's), 4 is written as IV (one less than five). This rule is: wherever four sequential occurrences of a letter would otherwise be needed, prefix the next Roman letter up with the letter that would have repeated. Thus, 4 is IV instead of IIII, 9 is IX instead of VIIII, etc. Let's call this the 'subtractive' rule.

Now, consider the code needed to satisfy this kata. Given that there are two rules, it seems as if there must be two cases. To handle the two cases, it feels like the code will need a conditional that has two branches, one for each rule.

The actual implementation code might be more procedural (the conversion logic could be hard-coded into the branches of the conditional) or more object-oriented (you could create an object to handle each rule and have a conditional somewhere to select the correct one), but regardless of whether you write a procedure or use OO composition, there's still a conditional.

The Insight

I hated this. Not only did I not want the conditional, but figuring out when to use which rule seemed like a royal PITA. It felt like the conditional would need to do something clever to select the correct rule, and I wasn't feeling particularly quick-witted. Thus, I found myself pondering this kata with a faint sense of dread, while the meetup loomed.

Regardless, I sat down to write some code, and immediately realized that although I was faintly aware that there were two conversion rules, I didn't know the full set of Roman letters and their associated Arabic values. I then consulted the Wikipedia page for Roman numerals, where I found something which gave me a dramatically simpler view of the problem. Serious lightbulb moment.

It turns out that the way we think about Roman numerals today is the result of an evolutionary process. In the beginning, they were uniformly additive. 4 was written as IIII and 9, VIIII. As time passed, the subtractive form crept in. 4 became IV, and 9, IX. In modern times we consistently use the shorter, subtractive form, but in Roman times it was common to see one form, or the other, or a combination thereof.

This means that the additive form is a completely legitimate kind of Roman numeral. (Who knew?) It can be produced in its entirety by rule 1, which is comfortingly simple to implement. The conversion from additive to subtractive is also dead easy, and can be accomplished via a simple mapping that encodes rule 2.

The key insight here is that converting from Arabic to additive Roman is one idea, and converting from additive to subtractive Roman is quite another. Solving this kata by converting Arabic numbers directly into subtractive Roman skips a step, and conflates these two ideas. It is this conflation that dooms us to the conditional.

Having had this realization, I wrote two simple bits of code. One converted Arabic to additive Roman, the other additive to subtractive Roman. Used in combination, they pass the tests.
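
The two bits of code themselves aren't reproduced here, but a minimal sketch of the two-step idea (my own names and structure, not necessarily Sandi's) might look like this:

```ruby
# Step 1: Arabic -> purely additive Roman (the 'additive' rule only).
ADDITIVE = { 1000 => 'M', 500 => 'D', 100 => 'C',
             50 => 'L', 10 => 'X', 5 => 'V', 1 => 'I' }.freeze

def to_additive(number)
  ADDITIVE.each_with_object(+'') do |(value, letter), roman|
    roman << letter * (number / value)
    number %= value
  end
end

# Step 2: additive -> subtractive Roman, as a simple mapping (the
# 'subtractive' rule). Longer patterns come first, so that DCCCC
# becomes CM rather than D followed by CD.
SUBTRACTIVE = { 'DCCCC' => 'CM', 'CCCC' => 'CD',
                'LXXXX' => 'XC', 'XXXX' => 'XL',
                'VIIII' => 'IX', 'IIII' => 'IV' }.freeze

def to_subtractive(additive)
  SUBTRACTIVE.reduce(additive) { |roman, (from, to)| roman.gsub(from, to) }
end

def to_roman(number)
  to_subtractive(to_additive(number))
end

to_additive(911)  # => "DCCCCXI"
to_roman(911)     # => "CMXI"
```

Each transformation encodes exactly one rule, and neither needs a conditional.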

I took the code to #westendruby, where someone pointed out that not only was my variant more understandable than many other implementations, but also that it could easily be extended to perform the reverse conversion. They were absolutely right; it took just a few lines of additional code to convert from Roman numerals back into Arabic numbers. Adding this new feature to other implementations was far more difficult.
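
The reverse conversion can be sketched in the same spirit: undo rule 2 first, then sum letter values. Again, the names and mapping tables here are my own, illustrative only:

```ruby
# Subtractive -> additive: expand each subtractive pair back out.
EXPAND = { 'CM' => 'DCCCC', 'CD' => 'CCCC',
           'XC' => 'LXXXX', 'XL' => 'XXXX',
           'IX' => 'VIIII', 'IV' => 'IIII' }.freeze

# In the purely additive form, the value is just the sum of the letters.
VALUES = { 'M' => 1000, 'D' => 500, 'C' => 100,
           'L' => 50, 'X' => 10, 'V' => 5, 'I' => 1 }.freeze

def to_arabic(roman)
  additive = EXPAND.reduce(roman) { |r, (from, to)| r.gsub(from, to) }
  additive.chars.sum { |letter| VALUES[letter] }
end

to_arabic('CMXI')  # => 911
```

It really is just a few lines, because the additive form carries no special cases.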

I wrote several versions of the kata. Here's the one I ended up liking the best.

The Upshot

I left that meetup with a newfound respect for what it means to have a conditional.

Conditionals are trying to tell you something. Sometimes it is that you ought to be using composition, i.e., that you should create multiple objects that play a common role, and then select and inject one of these objects for use in place of the conditional. Composition is the right solution when a single abstract concept has several concrete implementations.
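
That composition shape might be sketched like this (illustrative names, not from the kata): several objects play a common role, and the caller injects one at construction instead of branching at every use site.

```ruby
# Two objects playing a common 'tax' role.
class FlatTax
  def apply(amount)
    amount * 1.2
  end
end

class NoTax
  def apply(amount)
    amount
  end
end

class Checkout
  def initialize(tax)
    @tax = tax  # the conditional's job has moved to object selection
  end

  def total(amount)
    @tax.apply(amount)
  end
end

Checkout.new(NoTax.new).total(100)    # => 100
Checkout.new(FlatTax.new).total(100)  # => 120.0
```

The selection happens once, so the calculation code treats every case the same.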

However, rule 1 and rule 2 above don't represent alternative implementations of the same concept; instead, they represent two entirely unrelated ideas. The solution here is therefore not composition, but instead to create two transformations, and apply them in order. This lets you replace one "special" case with two normal ones, and reap the following benefits:

  • The resulting code is more straightforward.
  • The tests are more understandable.
  • The code can produce the pure additive form of Roman numerals, in addition to the subtractive one.
  • The code is easily extended to do the reverse conversion.

The keystone in this arch of understanding is being comfortable with transformations that appear to do nothing. It is entirely possible for a Roman numeral to look identical in its additive and subtractive forms. III, for example, looks the same either way. Regardless, the additive III must be permitted to pass unhindered through the transformation to subtractive. You can't check to see if it needs to be converted; instead you must blithely convert it. This makes everything the same, and it is sameness that gets rid of the conditional.

The Commentary

Now, if you'll permit, I'll speculate. I'm interested in why this solution occurred to me, but not others. Folks at the meetup found it startling in its simplicity and utility. Once known, it seems inevitable, but before knowing, inconceivable.

What would someone have to know in order to be able to dream up this solution? How can we teach OO so that folks learn to look at similar problems and recognize the underlying concepts? What quality in my background or style of thinking revealed them to me? Mind you, I'm not saying that my solution is perfect, but it's certainly different. Why?

I think there are two reasons. First, I'm committed to simplicity. I believe in it, and insist upon it. I am unwilling to settle for complexity. Simplicity is often harder than complexity, but it's worth the struggle, and everything in my experience tells me that with enough effort, it's achievable. I have faith.

Next, the desire for simplicity means that I abhor special cases. I am willing to trade CPU cycles to achieve sameness. I'll happily perform unnecessary operations on objects that are already perfectly okay if that lets me treat them interchangeably. Code is read many more times than it is written, and computers are fast. This trade is a bargain that I'll take every time.

Insist on simplicity.
Resist special cases.
Listen to conditionals.
Identify underlying concepts.
And search for the abstractions that let you treat everything the same.

Thanks for reading,



Public POOD course coming to New York City!

I'm pleased to announce that I'll be teaching a public Practical Object-Oriented Design course in New York City on Oct 31-Nov 2, 2016, fondly named POODNYC. And I'm also delighted that Avdi Grimm, Head Chef at Ruby Tapas, will again be the co-instructor.

Tickets are on sale now!

99Bottles Book

My new 99 Bottles book is out in private beta, and is slated for general release Any Day Now ™. News and discounts will be announced on the 99Bottles mailing list. This list has almost zero traffic, so you won't hate yourself if you sign up today.

This article was originally published here.

On Learning and Information by Liz

Published Tue 31 May 2016 15:32

This has been an interesting year for me. At the end of March I came out of one of the largest Agile transformations ever attempted (still going, surprisingly well), and learned way more than I ever thought possible about how adoption works at scale (or doesn’t… making it safe-to-fail turns out to be important).

The learning keeps going. I’ve just done Sharon L. Bowman’s amazing “Training from the Back of the Room” course, and following the Enterprise Services Planning Executive Summit, I’ve signed up for the five-day course for that, too.

That last one’s exciting for me. I’ve been doing Agile for long enough now that I’m finding it hard to spot new learning opportunities within the Agile space. Sure, there’s still plenty for me to learn about psychology, we’re still getting that BDD message out and learning more all the time, and there are occasional gems like Paul Goddard’s “Improving Agile Teams” that go to places I hadn’t thought of.

It’s been a fair few years since I experienced something of a paradigm shift in thinking, though. The ESP Summit gave that to me and more.

Starting from Where You Are Now

Getting 50+ managers of MD level and up in a room together, with relatively few coaches, changes the dynamic of the conversations. It becomes far less about how our particular toolboxes can help, and more about what problems are still outstanding that we haven’t solved yet.

Of course, they’re all human problems. The thing is that it isn’t necessarily the current culture that’s the problem; it’s often self-supporting structures and systems that have been in place for a long time. Removing one can often lead to a lack of support for another, which cascades. Someone once referred to an Agile transformation at a client as “the worst implementation of Agile I’ve ever seen”, and they were right; except it wasn’t an implementation, but an adoption. Of course it’s hard to do Agile when you can’t get a server, you’ve got regulatory requirements to consider, you’ve got five main stakeholders for every project, nobody understands the new roles they’ve been asked to play and you’re still running a yearly budgeting cycle – just some of the common problems that I’ve come across in a number of large clients.

Unless you’ve got a sense of urgency so powerful that you’re willing to risk throwing the baby out with the bathwater, incremental change is the way to go, but where do you start, and what do you change first?

The thing I like most about Kanban, and about ESP, is that “start from where you are now” mentality. Sure, it would be fantastic if we could start creating cross-functional teams immediately. But even if we do that, in a large organization it still takes weeks or months to put together any group that can execute on the proposed ideas and get them live, and it’s hard to see the benefits without doing that.

There’s been a bit of a shift in the Agile space away from the notion that cross-functional teams are necessarily where we start, which means we’re shifting away from some of the core concepts of Agile itself.

Dan North and Chris Matts, my long-time friends and mentors, have been busy creating a thing called Business Mapping, in which they help organizations match their investments and budgets to the capacity they actually have to deliver, while slowly growing “staff liquidity” that allows for more flexible delivery.

Enterprise Services Planning achieves much the same result, with a focus on disciplined, data-driven change that I found challenging but exciting: firstly because I realise I haven’t done enough data collection in the past, and secondly because it directs leaders to trust maths, rather than instincts. This is still Kanban, but on steroids: not just people working together in a team, but teams working together; not just leadership at every level, but people using the information at their disposal to drive change and experiment.

The Advent of Adhocracy

Professor Julian Birkenshaw’s keynote was the biggest paradigm shift I’ve experienced since Dave Snowden introduced me to Cynefin, and those of you who know how much I love that little framework understand that I’m not using the phrase lightly.

Julian talks about three different ages:

The Industrial Age: Capital and labour are scarce resources. Creates a bureaucracy in which position is privileged, co-ordination achieved by rules, decisions made through hierarchy, and people motivated by extrinsic rewards.

The Information Age: Capital and labour are no longer scarce, but knowledge and information are. Creates a meritocracy in which knowledge is privileged, co-ordination achieved by mutual adjustment, decisions made through logical argument and people motivated by personal mastery.

The Post-Information Age: Knowledge and information are no longer scarce, but action and conviction are. Creates an adhocracy in which action is privileged, co-ordination is achieved around opportunity, decisions are made through experimentation and people are motivated by achievement.

As Julian talked about this, I found myself thinking about the difference between the start-ups I’ve worked with and the large, global organizations.

I wondered – could making the right kind of information more freely available, and helping people within those organizations achieve personal mastery, give an organization the ability to move into that “adhocracy”? There are still plenty of places which worry about cost per head, when the value is actually in the relationships between people – the value stream – and not the people as individuals. If we had better measurements of that value, would it help us improve those relationships? Would we, as coaches and consultants, develop more of an adhocracy ourselves, and be able to seize opportunities for change as and when they become available?

I keep hearing people within those large organizations make comments about “start-up mindset” and ability to react to the market, but without having Dan and Chris’s “staff liquidity”, knowledge still becomes the constraint, and without having quick information about what’s working and what isn’t, small adjustments based on long-term plans rather than routine experimentation around opportunity becomes the norm.

So I’m going off to get myself more tools, so that I can help organizations to get that information, make sense of it, and create that flexibility; not just in their products and services, but in their changes and adoptions and transformations too.

And I’ll be thinking about this new pattern all the time. It feels like it fits into a bunch of other stuff, but I don’t know how yet.

Julian Birkenshaw says he has a book out next year. I can’t wait.

This article was originally published here.

Correlated in Retrospect by Liz

Published Mon 9 May 2016 21:12

A few years back, I went to visit a company that had managed to achieve a high level of agility without high levels of coaching or training, shipping several times a day. I was curious as to how they had done it. It turned out to be a product of a highly experimental culture, and we spent a whole day swapping my BDD knowledge for their stories of how they managed to reach the place they were in.

While I was there, I saw a very interesting graph that looked a bit like this:


“That’s interesting,” I said. “Is that your bug count over time? What happened?”

“Well,” one of them said, “we realised our bug count was growing, so we hired a new developer. We thought we’d rotate our existing team through a bug-fixing role, and we hypothesized that it would bring the bug count down. And it worked, for a while – that’s the first dip. It worked so well, we thought we’d hire another developer, so that we could rotate another team member, and we thought that would get rid of the bugs… but they started going up again.”

“Ah,” I said wisely. “The developer was no good?” (Human beings like to spot patterns and think they understand root causes – and I’m human too.)

“Nope.” They were all smiling, waiting for me to guess.

“Two new people was just too many? They got complacent because someone was fixing the bugs? The existing team was fed up of the bug-fixing role?” I ran through all the causes I could think of.


“All right. Who was writing the bugs?” I asked.


I was confused.

“The bugs were already there,” one of them explained. “The users had spotted that we were fixing them, and started reporting them. The bug count going up… that was a good thing.”

And I looked at the graph, and suddenly understood. I didn’t know Cynefin back then, and I didn’t understand complexity, but I did understand perverse incentives, and here was a positive version. In retrospect, the cause was obvious. It’s the same reason crime goes up when policemen patrol the streets: it becomes easier to report.

Conversely, a good way to have a low bug count is to make it hard to report. I spent a good few years working in Waterfall environments, and I can remember the arguments I had about whether something in my work was genuinely a bug or not… making it much harder for anyone testing my code, which meant I looked like a good developer (I really wasn’t).

Whenever we do anything in a complex system, we get unexpected side-effects. Another example of this is the Hawthorne effect, which goes something like this:

“Do you work better in this factory if we turn the lights up?”


“Do you work better if we turn the lights down?” (Just checking our null hypothesis…)


“What? Um, that’s confusing… do you work better with the lights up, or down?”

“We don’t care; just stop watching us.”

We’ve all come across examples of perverse incentives, which are another kind of unintended consequence. This is what happens when you turn measurements into targets.

When you’re creating a probe, it’s true that it’s important to have a way of knowing whether it’s succeeding or failing… but the signs of success or failure may only be clear in retrospect. A lot of people who create experiments get hung up on one hypothesis, and as a result they obsess over one perceived cause, or one measurement. In the process they might miss signs that the experiment is succeeding or failing, or even misinterpret one as the other.

Rather than having a hypothesis, in complexity, we want coherence – a realistic reason for thinking that the probe might have a good impact, with the understanding that we might not necessarily get the particular outcome we’re thinking of. This is why I get people creating probes to run through multiple scenarios of success or failure, so they think about what things they might want to be watching, or how they can put appropriate signals in place, to which they can apply some common sense in retrospect.

As we’ve seen, watching is itself an intervention… so you probably want to make sure it’s safe-to-fail.

This article was originally published here.

Extreme YAGNI: How BDD nails your prototyping stage by Chris

Published Wed 4 May 2016 08:51


Sometimes people don’t see the value in the BDD process. They contend that the BDD ceremonies are a waste of time, and get in the way of delivering real features to customers. Others cannot see how to apply BDD to their project, as no-one knows exactly what the project will look like yet. As they’re only in the prototyping stage, by the time a feature file is written and made executable, it’s already out of date.

I don’t agree with this. If our process is set up right, we can prototype just as effectively using BDD, and retain the collaboration benefits that it gives us.

You Ain’t Gonna Need It

One of the biggest wins that Test-driven Development (TDD) gives us is the principle of YAGNI - “You Ain’t Gonna Need It”. It’s very tempting when writing code to go off on a tangent and produce a beautiful structured work of art that has zero practical use. TDD stops us doing this by forcing us only to write code that a test requires. Even some experts who don’t practice or encourage TDD often espouse the power of writing the calling code first in order to achieve much the same effect.

BDD gives us the same YAGNI win, but at a level higher than TDD. With the BDD cycle we’re adding thin slices of customer-observable behaviour to our systems. If we only write the code that’s directly used by the business, then in theory we should be cutting down on wasteful development time.

However, there’s a snag here. If we’re prototyping, we don’t know whether this feature will make it into the final product. We still need to give feedback to our product team, so we need to build something. If the feature is complex, it might take a while to build it, and the feature might never get used. Why bother going through the process of specifying the feature using BDD and Cucumber features?

Happily, we can take YAGNI a level further to help us out.

Extreme YAGNI

Often in TDD, and especially when teaching it, I will encourage people to take shortcuts that might seem silly in their production code. For example, when writing a simple supermarket checkout class in JavaScript, we might start with a test like this:

    var checkout = new Checkout();
    expect(checkout.total()).toBe(0);

Our test defines our supermarket checkout to have a total of zero on creation. One simple way to make this work would be to define the following class:

    var Checkout = function() {};
    Checkout.prototype.total = function() {
        return 0;
    };
You might think that’s cheating, and many people define a member variable for total, set it to 0 in the constructor, and miss this step out entirely. There is however an important principle at stake here. The code we have does exactly what the test requires it to. We may not need a local variable to store total at all.

Here’s the secret: we can practice this “extreme YAGNI” at the level of our features, too. If there’s a quick way to make our feature files work, then there’s nothing to stop us taking as many shortcuts as we can to get things working quickly.

For example, if we’re testing the user interface of our system via Cucumber features, one fast way to ensure things are working is to hard code the values in the user interface and not implement the back end behaviour too early. Why not make key pages static in your application, or hard code a few cases so your business gets the rough idea?
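As a sketch of that shortcut (the names here are invented for illustration, not from the article): the “back end” for a delivery-estimate feature is just canned data. The feature file passes, the business sees a working page, and no real logic exists yet.

```javascript
// Hard-coded "back end" for a delivery-estimate page. It's enough to make
// the Cucumber scenario pass and gather feedback; real behaviour waits
// until a failing scenario demands it.
function deliveryEstimate(postcode) {
  // Deliberately ignores the postcode for now - extreme YAGNI.
  return "3-5 working days";
}
```

When the business decides the rough idea is right, the next scenarios force the hard-coded value to become real behaviour, one slice at a time.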

Again, you might think that’s cheating, but your features pass, so you’ve delivered what’s been asked for. You’ve spent the time thrashing out the story in a 3 amigos meeting, so you gain the benefits of deliberately discovering your software. You’re giving your colleagues real insight to guide the next set of stories, rather than vague guessing up front. Our UX and design colleagues now have important feedback through a working deployed system very quickly, and quick feedback through working software is a core component of the agile manifesto.

By putting off implementing the whole feature until later, we can use BDD to help us navigate the “chaotic” Cynefin space rather than just the “complicated” space. This in theory makes BDD twice as useful to our business.

Fast, fluid BDD

This all assumes that we have a fast, fluid BDD process, with close collaboration built in. If it takes a week to coordinate everyone for the next feature file, then the temptation is to have a long meeting and go through too many features, without a chance to pause, prototype, deliver and learn from working software. Maybe it’s time to re-organise those desks and sit all the members of your team together, or clean up your remote working practices, or block out time each day for 3 amigo sessions. You’ll be surprised how much small changes like these speed your team up.

This article was originally published here.

The Wrong Abstraction by Sandi Metz

Published Wed 20 Jan 2016 20:30

I originally wrote the following for my Chainline Newsletter, but I continue to get tweets about this idea, so I'm re-publishing the article here on my blog. This version has been lightly edited.

I've been thinking about the consequences of the "wrong abstraction." My RailsConf 2014 "all the little things" talk included a section where I asserted:

duplication is far cheaper than the wrong abstraction

And in the summary, I went on to advise:

prefer duplication over the wrong abstraction

This small section of a much bigger talk invoked a surprisingly strong reaction. A few folks suggested that I had lost my mind, but many more expressed sentiments along the lines of:

The strength of the reaction made me realize just how widespread and intractable the "wrong abstraction" problem is. I started asking questions and came to see the following pattern:

  1. Programmer A sees duplication.

  2. Programmer A extracts duplication and gives it a name.

    This creates a new abstraction. It could be a new method, or perhaps even a new class.

  3. Programmer A replaces the duplication with the new abstraction.

    Ah, the code is perfect. Programmer A trots happily away.

  4. Time passes.

  5. A new requirement appears for which the current abstraction is almost perfect.

  6. Programmer B gets tasked to implement this requirement.

    Programmer B feels honor-bound to retain the existing abstraction, but since it isn't exactly the same for every case, they alter the code to take a parameter, and then add logic to conditionally do the right thing based on the value of that parameter.

    What was once a universal abstraction now behaves differently for different cases.

  7. Another new requirement arrives.
    Programmer X.
    Another additional parameter.
    Another new conditional.
    Loop until code becomes incomprehensible.

  8. You appear in the story about here, and your life takes a dramatic turn for the worse.
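A toy version of steps 5 through 7 (my example, not Sandi's): each new requirement adds a parameter and a conditional to the once-clean abstraction.

```javascript
// Originally extracted to remove duplication; now each caller has bent it
// its own way with a new option and a new conditional.
function formatName(user, options) {
  options = options || {};
  if (options.lastFirst) {               // added for the index page (Programmer B)
    return user.last + ", " + user.first;
  }
  if (options.uppercase) {               // added for the banner (Programmer X)
    return (user.first + " " + user.last).toUpperCase();
  }
  return user.first + " " + user.last;   // the original behaviour
}
```

What was one abstraction is now three behaviours selected by flags, and every new caller has to understand all of them.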

Existing code exerts a powerful influence. Its very presence argues that it is both correct and necessary. We know that code represents effort expended, and we are very motivated to preserve the value of this effort. And, unfortunately, the sad truth is that the more complicated and incomprehensible the code, i.e. the deeper the investment in creating it, the more we feel pressure to retain it (the "sunk cost fallacy"). It's as if our unconscious tells us "Goodness, that's so confusing, it must have taken ages to get right. Surely it's really, really important. It would be a sin to let all that effort go to waste."

When you appear in this story in step 8 above, this pressure may compel you to proceed forward, that is, to implement the new requirement by changing the existing code. Attempting to do so, however, is brutal. The code no longer represents a single, common abstraction, but has instead become a condition-laden procedure which interleaves a number of vaguely associated ideas. It is hard to understand and easy to break.

If you find yourself in this situation, resist being driven by sunk costs. When dealing with the wrong abstraction, the fastest way forward is back. Do the following:

  1. Re-introduce duplication by inlining the abstracted code back into every caller.
  2. Within each caller, use the parameters being passed to determine the subset of the inlined code that this specific caller executes.
  3. Delete the bits that aren't needed for this particular caller.

This removes both the abstraction and the conditionals, and reduces each caller to only the code it needs. When you rewind decisions in this way, it's common to find that although each caller ostensibly invoked a shared abstraction, the code they were running was fairly unique. Once you completely remove the old abstraction you can start anew, re-isolating duplication and re-extracting abstractions.
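As a sketch of the unwind (my own example, not from the article): imagine a shared `formatName(user, options)` that had grown `uppercase` and `lastFirst` flags. Inlining it into each caller, then deleting the branches each caller never executed, leaves two small, honest functions.

```javascript
// After inlining and pruning, each caller keeps only the code it needs.
function bannerName(user) {   // was: formatName(user, { uppercase: true })
  return (user.first + " " + user.last).toUpperCase();
}

function indexName(user) {    // was: formatName(user, { lastFirst: true })
  return user.last + ", " + user.first;
}
```

With the duplication visible again, any genuinely shared abstraction can be re-extracted from what these functions actually have in common.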

I've seen problems where folks were trying valiantly to move forward with the wrong abstraction, but having very little success. Adding new features was incredibly hard, and each success further complicated the code, which made adding the next feature even harder. When they altered their point of view from "I must preserve our investment in this code" to "This code made sense for a while, but perhaps we've learned all we can from it," and gave themselves permission to re-think their abstractions in light of current requirements, everything got easier. Once they inlined the code, the path forward became obvious, and adding new features became faster and easier.

The moral of this story? Don't get trapped by the sunk cost fallacy. If you find yourself passing parameters and adding conditional paths through shared code, the abstraction is incorrect. It may have been right to begin with, but that day has passed. Once an abstraction is proved wrong the best strategy is to re-introduce duplication and let it show you what's right. Although it occasionally makes sense to accumulate a few conditionals to gain insight into what's going on, you'll suffer less pain if you abandon the wrong abstraction sooner rather than later.

When the abstraction is wrong, the fastest way forward is back. This is not retreat, it's advance in a better direction. Do it. You'll improve your own life, and the lives of all who follow.


Public POOD course coming to San Francisco!

I'm pleased to announce that I'll be teaching a public Practical Object-Oriented Design course in San Francisco on May 9-11, 2016, fondly named POODGATE. And (drum roll) I'm doubly pleased to announce that Avdi Grimm, Head Chef at Ruby Tapas, will be the co-instructor.

This might be your one opportunity to spend three fun-filled days in a room with the two of us. :-) Last fall's New York City course filled very quickly; if you're interested, don't procrastinate. Tickets are on sale now!

99Bottles Book

The beta version of my 99Bottles book will available in February. News, and even better, discounts will be announced on the 99Bottles mailing list. This is a seriously low traffic list, so you will come to no harm if you sign up now.


This article was originally published here.

BDD: A Three-Headed Monster by Liz

Published Mon 14 Dec 2015 19:31

Back in Greek mythology, there was a dog called Cerberus. It guarded the gate to the underworld, and it had three heads.

There was a great guy called Heracles (Hercules in Latin) who was a demi-god, which means he would have been pretty awesome if the gods hadn’t intervened to make him go mad and kill his entire wife and family. Greek gods in general aren’t very nice, and you wouldn’t want them to visit at Christmas.

Heracles ended up atoning for this with twelve tasks, the last of which was to capture Cerberus himself. (It was meant to be ten tasks, but his managers decided that he had collaborated with someone from another team on one of them, and got paid by an external stakeholder for a second, so they didn’t count.)

Fortunately, it turns out that BDD’s a bit easier to tame than Cerberus, and it works best if you involve other people from the outset.

The first thing we do with BDD is have a bit of a conversation. If I know nothing about a project, I’ll ask someone to tell me about it, and whenever I think it’s useful, I ask the magic question: “Can you give me an example?”

If they aren’t specific enough – for instance, they say that there are these twelve labours they have to do – I’ll get them to be more specific. “Can you give me an example of a labour?” Labour to me means Jeremy Corbyn, not wrestling the Nemean Lion, so having these conversations helps me to find out about any misunderstandings early.

Eventually, we end up with something we can think about in concrete terms. There’s a template that we use in BDD – Given a context, when an event happens, then an outcome should occur – but even without the template, having a concrete example which starts from some context, has an event happen and ends with a desired outcome (or the best possible outcome that we can reasonably expect to happen, at least) is useful. It lets us talk about those scenarios, and ask questions about other scenarios that might exist.

Exploration by Example (what it could do)

I like to ask two questions, the patterns for which I call Context Questioning, and Outcome Questioning. “Is there any other context, which, for the same event, gives us a different outcome?” And, “Is there any other outcome that’s important?”

The first is easy. We can usually think of ways that our outcome might be thwarted, and if we can’t, then our testers can, because testers have that “break all the things!” mindset that causes them to think of scenarios that might go a different way to the way we expect.

The second is especially useful though, because it lets us talk about what other stakeholders might need to be involved, and what they need. This is particularly important if there’s a transaction happening – both stakeholders get what they want from it, or three if you’re buying a product or service via a third party like Uber. Without it, you might end up getting one stakeholder’s desired outcome but missing the other.

“Why did they leave that wooden horse out there anyway?”

“We should ask a tester. Do we have any left that haven’t been eaten by sea-serpents?”

With these two questioning patterns, we can start to explore the scope of our requirements. We now know what the system could do. What should it do? By deciding which scenarios are out of scope, we narrow down the requirements. If we don’t know enough to decide what our scenarios ought to look like, we can either do something to try it out in a way that’s safe-to-fail and get some understanding (useful if nobody in the organization has ever done it before) or we can find an expert to talk to (if they’re available).

If you find you never talk about scenarios which you decide are out-of-scope or irrelevant, either quickly or later on, you may not be exploring enough.

Specification by Example (what it should do)

Now we’ve narrowed it down to what it should do, and we’ve got some scenarios to illustrate those aspects of behaviour, we can start turning the ideas into reality. The scenarios give us a focus, and let us continue to ask questions. If we ever come across an area we’re not familiar with, we can fall back into exploration, until we have understanding and can specify once more what our system should do.

“So, if we frown really fiercely and heavily at Charon, he should let us past to get to Cerberus. What does a really fierce frown look like? Can you give me an example?”

“Well, imagine someone just used the words ‘target velocity’ in your hearing…”

“Oh, like this?

And, once we’ve made our ideas into something real, we can use our scenarios to see whether it does what we thought it should do – a test.

Test by Example (what it does)

We can run through the scenario again, either manually or using automation, to see if it works.

And, ideally, we write the automated version of the test first, so that it gives us a very clear focus and some fast feedback; though if we’re in high-uncertainty and keep having to fall back to exploration, we might want to lay off of that for a bit.

“So, what did happen?”

“Um, I chopped off a head, and it grew back. Twice…”

“Oh, sugar. Hold on… I think Jason ran into the same behaviour last week. I can’t remember whether it was intended or not…”

And that’s it.

Some people really focus on the tools and testing. Others focus on specification. Really, though, it’s exploration that we start with; those thought-experiments that are cheap to change, and not nearly as dangerous as the real thing.

In our software, we’re not even dealing with three-headed dogs or mythical monsters; just people who want things. They aren’t even intent on making it hard for us. Even better, those people often find value in small things, and don’t need us to finish all twelve tasks to have a completed story. It’s pretty easy to explore what they want using examples, then use the results of that conversation as specifications, then as tests.

Even so, if you do manage to tame BDD, with all three of its heads, you’re still pretty awesome.

Just remember that a puppy is not just for Christmas.

This article was originally published here.

Unit tests are your specification by Seb Rose

Published Sat 5 Dec 2015 00:06

Recently Schalk Cronjé forwarded me a tweet from Joshua Lewis about some unit tests he’d written.

I took a quick look and thought I may as well turn my comments into a blog post. You can see the full code on github.

Comment 1 – what a lot of member variables

Why would we use member variables in a test fixture? The fixture is recreated before each test, so it’s not to communicate between the tests (thankfully).

In this case it’s because there’s a lot of code in the setup() method (see comment 2) that initialises them, so that they can be used by the actual tests.

At least it’s well laid out, with comments and everything – if you like comments, that is. And guess what – I don’t. Happily, they’re wrapped in a #region, so we don’t even have to look at them if our IDE understands it properly.

Comment 2 – what a big setup() you have

I admit it, I don’t like setup()s – they move important information out of the test, damaging locality of reference, and forcing me to either remember what setup happened (and my memory is not good) or to keep scrolling to the top of the page. Of course I could use a fancy split screen IDE and keep the setup() method in view too, but that just seems messy.

Why is there so much in this setup()? Is it all really necessary? For every test? Looking at the method, it’s hard to tell. I guess we’ll find out.

Comment 3 – AcceptConnectionForUnconnectedUsersWithNoPendingRequestsShouldSucceed

Ok, so I’m an apostate – I prefer test names to be snake_case. I just find it so much easier to parse.

The name, though, is pretty good. Very descriptive, although I’m still not sure what ShouldSucceed really means.

The problem starts when I try to relate the comment (C):

///GIVEN User1 exists AND User2 exists AND they are not connected AND User1 has requested to Connect to User2

with the name of the test (N), which incorporates the text: WithNoPendingRequests

and the code in the test (T):

Given(user1Registers, user2Registers, user1RequestsConnectionToUser2);
AndEventsSavedForAggregate<User>(user1Id, user1Registered, connectionRequestedFrom1To2, connectionCompleted);
AndEventsSavedForAggregate<User>(user2Id, user2Registered, connectionRequestFrom1To2Received, connectionAccepted);

So, the name (N) says that:

  1. the users should not be connected, and
  2. they (?) should have no pending requests [this is probably a cut and paste error]

The comment (C) says that:

  1. the users should not be connected, and
  2. user1 has requested a connection to user2

And the test (T) says:

  1. nothing explicit about whether they are already connected
  2. nothing explicit about pending requests [although there is a request connection command, which is implicitly pending]
  3. plenty of assertions on events that have nothing directly to do with identifying a successful connection

Why is any of this a problem? Well, in the first place the tests (a.k.a. specification) should be consistent and easy to understand. These tests are way better than many that I see, but still it’s worth thinking about how they can be even better.

And, secondly, tests should only assert on the behaviour that they are actually interested in. Over-specifying a test makes it brittle in the face of change, and one thing we can do without is brittle test suites.

Criticism is easy – is there an alternative?

  1. Use builders to create instances at the point you need them.
  2. Use methods on the builder to express attributes that are important for the behaviour being validated in the test.
  3. Only assert on the event(s) that are directly related to the behaviour being validated in the test (that may require the writing of more tests)

This is one way you could write it in Gherkin, using Roger (the requester) and Andrea (the accepter):

Given Roger and Andrea are not connected
And Roger has made a connection request to Andrea
When Andrea accepts the connection request
Then Roger and Andrea are connected

So here’s my re-write. Note that there are utility classes/methods that would need to be written to allow this to compile:

public void AcceptingConnectionRequestFromUnconnectedUserShouldSucceed() {
  User roger = userBuilder.withNoConnections().build();
  User andrea = userBuilder.withNoConnections().build();
  roger.requestConnectionTo(andrea);          // method name illustrative

  andrea.acceptConnectionRequestFrom(roger);  // method name illustrative

  assertThatConnectionExistsBetween(roger, andrea);
}

I’ve dropped the Given/When/Then format, because, though it was concise, I don’t find that it adds much when we’re at the implementation level. In fact, the very cleverness that allows a list of commands or events to be handled by the G/W/T doesn’t work for me – I find that it obscures what’s going on.

Instead, I’ve stuck to old school arrange/act/assert structure delineated by white space. I’ve posited the existence of a user builder object and a user class for use in the test code. Is this an overhead? Sure, but it will get used over and over again, and it localises the behaviour in a single place, so that when the flow of events changes, or new flows are identified, there’s only a single place in the code that needs maintenance.

I’ve also suggested the withNoConnections() method – which will likely be a no-op – to emphasise that being unconnected is important. You may consider this overkill in this situation, since it’s unlikely that two freshly created users will be connected. I prefer to be explicit about these things.
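As a sketch of what such a builder might look like – all names here are illustrative stand-ins, not from the original codebase – something along these lines would do:

```java
import java.util.HashSet;
import java.util.Set;

// Minimal domain stand-in: a user who can be connected to other users.
class User {
    private final Set<User> connections = new HashSet<>();

    void connectTo(User other) {
        connections.add(other);
        other.connections.add(this);
    }

    boolean isConnectedTo(User other) {
        return connections.contains(other);
    }
}

// The builder spells out only the attributes that matter to the test;
// everything else gets sensible defaults in one maintainable place.
class UserBuilder {
    // A no-op in practice (new users start unconnected), but calling it
    // makes the test's precondition explicit to the reader.
    UserBuilder withNoConnections() {
        return this;
    }

    User build() {
        return new User();
    }
}

public class BuilderSketch {
    public static void main(String[] args) {
        UserBuilder userBuilder = new UserBuilder();
        User roger = userBuilder.withNoConnections().build();
        User andrea = userBuilder.withNoConnections().build();
        System.out.println(roger.isConnectedTo(andrea)); // prints: false
    }
}
```

The point of the builder isn’t the (trivial) code, it’s the vocabulary: the test reads as its own specification, and when the construction flow changes, only the builder needs updating.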

A question I’m still left with is “how should we implement assertThatConnectionExistsBetween()?”. My initial thought would be that it would check that the connectionCompleted() event had been recorded, but that really only checks that the event has been emitted, not that the connection has actually taken place. Without digging deeper into the domain it’s hard to know which approach is more appropriate.
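One possible sketch of the event-checking interpretation – where the assertion inspects the events an aggregate has recorded – might look like this. The event name, the recording mechanism, and all class names are assumptions for illustration only:

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in for an event-sourced aggregate that records emitted events.
class Aggregate {
    final String id;
    final List<String> recordedEvents = new ArrayList<>();

    Aggregate(String id) { this.id = id; }

    void record(String event) { recordedEvents.add(event); }
}

class ConnectionAssertions {
    // Checks only that a connectionCompleted event naming the other party
    // was recorded on each side -- i.e. "the event was emitted", which is
    // weaker than "the connection has actually taken place".
    static boolean connectionExistsBetween(Aggregate a, Aggregate b) {
        return a.recordedEvents.contains("connectionCompleted:" + b.id)
            && b.recordedEvents.contains("connectionCompleted:" + a.id);
    }

    static void assertThatConnectionExistsBetween(Aggregate a, Aggregate b) {
        if (!connectionExistsBetween(a, b)) {
            throw new AssertionError(
                "Expected a completed connection between " + a.id + " and " + b.id);
        }
    }
}

public class AssertionSketch {
    public static void main(String[] args) {
        Aggregate roger = new Aggregate("roger");
        Aggregate andrea = new Aggregate("andrea");
        roger.record("connectionCompleted:andrea");
        andrea.record("connectionCompleted:roger");
        ConnectionAssertions.assertThatConnectionExistsBetween(roger, andrea);
        System.out.println("connection asserted"); // prints: connection asserted
    }
}
```

The alternative would be to query the domain state itself rather than the recorded events; which is more appropriate depends on what the aggregate exposes.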

This article was originally published here.

The mob rules, ok? by Steve Tooke

Published Tue 1 Dec 2015 00:00

During the last 6 weeks or so, I’ve had the pleasure of working on Cucumber Pro with the team at Cucumber Limited. One of the key things making this such a good experience has been the way we’ve been working: Mob Programming.

What is Mob Programming?

All the brilliant people working on the same thing, at the same time, in the same space, and on the same computer — @woodyzuill

Mob Programming is a term coined by Woody Zuill. It describes a practice that he and his team “discovered” while he was coaching at Hunter Industries. It’s a way of working where the whole team gather around a single computer and work on a single problem together. The team take turns to “drive” the computer, while the other members of the team help to think through the problem and find solutions.

A Remote Mob

The Cucumber Pro team works remotely. We are geographically distributed (although we are usually in similar timezones). Obviously this makes sharing a computer more of a challenge, but we’ve found a couple of solutions that are working well for us.

The first thing is that the person driving always works on their own computer. This allows everyone to use the tools they are most comfortable with, and saves them from having to deal with lag or other connection problems on input.

To share the driver’s computer with the rest of the team we have mostly used Screenhero. Screenhero allows us to share a single computer with several other participants (I think we’ve had up to 5 or 6). Unlike other screensharing technology it also gives each user a mouse pointer. This is especially useful when trying to point out where that misspelt variable is hiding. Screenhero also allows the navigators to type, which helps from time to time.

While Screenhero does provide a voice channel, we generally prefer to use Google Hangouts for voice and video. Partly because the sound is better, but really because being able to see each other is great!

We haven’t found a really good solution for a shared whiteboard yet. Most of the drawing we’ve done has been on paper and shared with photographs. We’ve also experimented with an iPEVO camera, which lets you share a drawing live as it happens. We’ve used it pointed at paper on the desk, and with a whiteboard. This is a bit more of an interactive experience, but it still only allows one person to draw.

Mornings only

We decided that the Cucumber Pro mob would only convene in the mornings. This gives us 3.5 focussed hours where we all work together. These morning mob sessions are where we take design decisions. We discuss the work that’s to be done, talk through the business, and find examples that we can use to illustrate it in our Cucumber features. It’s also in these sessions that we write most of the code.

Afternoons are more free-form. For a start, everyone in the team has other responsibilities, so this leaves space for that work: dealing with email, running a business, open source, and so on.

But… it also leaves space for people to think, to read, to experiment, to fix little niggles, to automate tiresome tasks. This space is invaluable. We liberally use TODOs while we are mobbing. We use them in the same way we might note something we want to address later on an index card. Fixing TODOs in the afternoon has been quite common. Sometimes this is just tidying up and getting work out of the way, so the mob can focus on bigger tasks. Sometimes this is a spike to try out some idea before presenting it back to the mob.

Pull requests

We use GitHub’s pull requests in a couple of different ways. Firstly, any work that people undertake outside of the mob (in the afternoon) is almost always done on a pull request. This allows us to use GitHub as the communication channel about the code, and it means that work done individually is seen by someone else before it’s merged.

We have also been using pull requests for work-in-progress. Not everyone on the Cucumber Pro team is available every day. There’s often someone away delivering training or consulting, or at a conference. Again, pull requests let us use GitHub’s great tooling for seeing changes to the code over time, and for having asynchronous discussions with those who weren’t able to join the mob.

Daily retrospective

We end every mob session with a short retrospective. We ask ourselves two questions:

  • What have we learnt?
  • What puzzles us?

We use this as a chance to reflect on the work we have done, and how things went. We try to recognise things that have gone well so we can do more of them, and recognise problems early so that we can head them off.

We also spend a few minutes thinking about the next steps, where the mob’s focus should go next.

We write all of this up in a file at the root of the project and commit it to the repository. This is helpful for the team members that weren’t in the mob session. It helps to share what we’ve learnt and our questions with them. It also marks where the mob finished that day.

We’re currently adding each retrospective at the top of a single file, and maintaining a history. I’m confident that it will be useful to reflect back on how our thoughts and feelings about the project change over time.


Mob programming is a great way to build a team. I feel that we get a real sense that we’re working together towards a common goal. We solve problems together. We learn together and we teach each other. By reflecting on each session, we learn more about how each of us likes to work, and how we can all help each other.

The remote working lets us all be comfortable in our surroundings. We’ve had Matt join for a few hours while he’s been in Australia. The last couple of days Aslak has been in the mob, with his new baby nestled in a sling — there is something really calming about hearing contented baby gurgles while you’re working.

Remote collaboration is quite an intense way to work. I’ve done quite a lot of remote pair programming and it can be quite draining. Keeping the afternoons free really helps to combat this.

Working in the mob every day is fantastic. I look forward to our sessions because they’re fun, and I feel like we’re growing as a team every day — but the afternoon space is just as important.

This article was originally published here.

Capabilities and Learning Outcomes by Liz

Published Thu 17 Sep 2015 17:37

When I started training, I taught topics. Lots of topics!

Nowadays, thanks to some help from Marian Willeke and her incredible understanding of how adults learn, I get to teach capabilities instead. It’s much more fun. This is how I do it.

First off, because I’m into BDD and hypnosis, I sit and imagine some scenarios in which people actually use the learning I’ve given them. Maybe they’re the Product Owner of a Scrum Team, or using BDD for the first time, or they have a good understanding of Agile, and now they’re learning how to coach. I watch them in my head and look at what they do, or I think about what I’ve done, in similar situations.

As with all scenarios, the event that’s happening requires capabilities: the ability to do something, and do it well.

So, for instance, I imagine a team sitting together in a huddle, talking through BDD’s scenarios. Well, you’ll need to be able to use the different strengths of the different roles. And you’ll need to be able to construct well-formed scenarios, and to differentiate between acceptance criteria and a specific example.

If I get stuck thinking about what capabilities I need to teach, I go look at Bloom’s Taxonomy, and the Revised Cognitive Domain – I really like Don Clark’s site. Marian gives some advice; when you’re teaching adults, aim higher than merely remembering; give them something they can actually do with it. The keywords help me to think about the level of expertise that the learners will need to get to (though I don’t always stick to them).

So for instance, I end up with capabilities like these:

  • Explain BDD and its practices
  • Apply shortcuts to well-understood requirements to reduce analysis and planning time
  • Identify core and incidental stakeholders for a project

If I’m training, I use these in conjunction with a bit of teaching, then games or exercises that help attendees really experience the things they’re able to do for the first time, and give me a chance to help them if I see they need it. The learning outcomes make a great advert for the course, too! And I use them as a backlog while I’m running the course, so I always know what’s done and what’s next.

More recently, I’ve been using this technique to put together documents which serve the same purpose for people I can’t train directly. I put the learning outcomes at the start: “When you’ve read this, you will be able to…” It’s fun to relate the titles of each section back to the outcomes at the beginning! And, of course, each capability is an embedded command to someone to actually try out the new skill.

Best of all, each capability comes with its own test. As the person writing the course or document, I can think to myself, “If my student goes on this course or reads this document, will they be able to do this thing?”

And, if they do actually take the course, I can ask them directly: “Do you feel confident now about doing this thing?” It gives me a chance to go over material if I have time, or to offer follow-up support afterwards (which I generally offer with all my courses, anyway).

You can read more about Bloom’s Taxonomy, and see the backlog for one of my BDD courses, on Marian’s site.

Now you should be able to create courses using capabilities, instead of topics. Hopefully you really want to, as well… but the Affective Domain, and what you can do with it, is a topic for another post.

This article was originally published here.

Hands-on With the Cucumber Events API by Steve Tooke

Published Mon 14 Sep 2015 00:00

Cucumber Ruby 2.1 introduces the new Events API — a simple way to find out what’s happening while Cucumber runs your features. Events are read-only and simplify the process of writing formatters, and other output tools.

I’ll illustrate how to use the API with a worked example that streams Cucumber test results to a browser in real time.

Can you give me an example?

As much as we love our console applications, we can get a much richer experience in a web browser. How could we get Cucumber to push information into a nice web UI, without losing the rich information available with the built-in formatters?

Let’s build a super-simple example using the Events API that uses a websocket to update a web page while cucumber is running.

There are lots of ways to run a websocket server – a favourite of mine is websocketd, because it’s super simple. Give it an executable that reads STDIN and writes STDOUT and you’re done!

For our very simple websocket reporter we are going to use a UNIX named pipe to push information out of our cucumber process. To get these events out onto a websocket we need a shell command that reads from a named pipe and echoes each line back onto STDOUT.



#!/bin/sh
fifo_name="events"

[ -p $fifo_name ] || mkfifo $fifo_name

while true
do
  if read line <$fifo_name; then
    echo $line
  fi
done

Make sure the script is executable with chmod +x.

When you run it, it will create an events named pipe if one doesn’t exist already, then wait until there is data on the pipe for it to read. We can see it in action by putting some data on to the pipe: echo "hello, world" > events.

Writing Cucumber Events to the pipe

Let’s start by asking cucumber to write messages to the pipe. Add the following to features/support/env.rb

EVENT_PIPE = "events"
unless File.exist?(EVENT_PIPE)
  `mkfifo #{EVENT_PIPE}`
end

publisher = File.open(EVENT_PIPE, "w+")
publisher.sync = true
publisher.puts "started"

at_exit {
  publisher.puts "done"
}

This doesn’t use the Events API yet, but we’ve now got the plumbing in place to write to the same named pipe as the shell script above reads from. With that script up and running, you should be able to run cucumber and see started and done echoed to the terminal.

For our simple web-browser cucumber reporter we want to show each step that cucumber runs, and its result. We want cucumber to tell us when it starts to execute, when it starts to run each step, when it finishes a step (and what the result was) and when it’s finished executing.

We’ll send some formatted JSON that gives us some information about the events:

{
  "event": "event_name",
  "data": {} // information about the event
}

We can modify features/support/env.rb to give us the start and end events:

require 'json'

EVENT_PIPE = "events"
unless File.exist?(EVENT_PIPE)
  `mkfifo #{EVENT_PIPE}`
end

publisher = File.open(EVENT_PIPE, "w+")
publisher.sync = true
publisher.puts({event: "started", data: {}}.to_json)

at_exit {
  publisher.puts({event: "done", data: {}}.to_json)
}

The Cucumber Events API gives us access to what’s going on inside Cucumber while it’s running our features. We want to know when a step is going to be run, and what happened when it finished. Cucumber provides us the BeforeTestStep and AfterTestStep events. To hear about these events we can use the cucumber AfterConfiguration hook to get access to the current config, and add handlers for specific events with the on_event method:

AfterConfiguration do |config|
  config.on_event :before_test_step do |event|
    # ...
  end

  config.on_event :after_test_step do |event|
    # ...
  end
end

Putting this all together we can modify features/support/env.rb to push these events out onto our named pipe too:

require 'json'

EVENT_PIPE = "events"
unless File.exist?(EVENT_PIPE)
  `mkfifo #{EVENT_PIPE}`
end

publisher = File.open(EVENT_PIPE, "w+")
publisher.sync = true

AfterConfiguration do |config|
  publisher.puts({event: "started", data: {}}.to_json)

  config.on_event :before_test_step do |event|
    publisher.puts({
      event: "before_test_step",
      data: {}
    }.to_json)
  end

  config.on_event :after_test_step do |event|
    publisher.puts({
      event: "after_test_step",
      data: { result: event.result.to_s }
    }.to_json)
  end
end

at_exit {
  publisher.puts({event: "done", data: {}}.to_json)
}

Now if you run Cucumber with the pipe-reading script up and running, you should see the JSON events echoed to the terminal.

Hooking up a WebSocket

Great! We’ve got Cucumber sending our events. We now want to get these events pushed into a web-page using a websocket.

websocketd lets us hook our command up to a websocket. Let’s have a look at what happens using websocketd’s devconsole mode:

$ websocketd --port=8080 --devconsole ./

Then point your browser at http://localhost:8080 and you should see the websocketd dev console.

Clicking the little “✔” in the top left will connect the websocketd’s dev console to the running socket. Now if you echo some text on to the named pipe, you will see it appear in the console on the web browser. Now running Cucumber again, you should see something like this in the web browser:

WebSocket Cucumber

Finishing everything up, let’s create a simple web page that uses the websocket to get information from Cucumber as it’s running. Save this as index.html:

<!DOCTYPE html>
<html>
<body>
<h1>Cucumber Runner</h1>
<p id="status">disconnected</p>
<div id="runner"></div>
<script>
  // helper functions: update the page as events arrive
  function stepStarted() {
    var runner = document.getElementById("runner");
    var resultNode = document.createElement("span");
    resultNode.textContent = "*";
    runner.appendChild(resultNode);
  }

  function stepResult(result) {
    var resultNode = document.getElementById("runner").lastElementChild;
    resultNode.textContent = result;
  }

  function clearRunner() {
    document.getElementById('runner').innerHTML = "";
  }

  function statusWaiting() {
    document.getElementById('status').textContent = "waiting";
  }

  function statusRunning() {
    document.getElementById('status').textContent = "running";
  }

  function statusDisconnected() {
    document.getElementById('status').textContent = "disconnected";
  }

  function done() {
    statusWaiting();
  }

  var CucumberSocket = function() {
    var ws = new WebSocket('ws://localhost:8080/');
    var callbacks = {};

    this.on = function(event_name, callback){
      callbacks[event_name] = callback;
      return this;
    };

    var dispatch = function(event_name, message){
      var callback = callbacks[event_name];
      if(typeof callback == 'undefined') return;
      callback(message);
    };

    ws.onmessage = function(event){
      var json = JSON.parse(event.data);
      dispatch(json.event, json.data);
    };

    ws.onclose = function(){dispatch('close',null)};
    ws.onopen = function(){dispatch('open',null)};
  };

  var cucumber = new CucumberSocket();

  cucumber.on('open', statusWaiting);

  cucumber.on('close', statusDisconnected);

  cucumber.on('started', function() {
    clearRunner();
    statusRunning();
  });

  cucumber.on('before_test_step', function(data) {
    stepStarted();
  });

  cucumber.on('after_test_step', function(data) {
    stepResult(data.result);
  });

  cucumber.on('done', function() {
    done();
  });
</script>
</body>
</html>

Using websocketd’s static site server we can get our little web page up and running: websocketd --port=8080 --staticdir=. ./ and open http://localhost:8080. Now running Cucumber should show you progress in the web page!

Cucumber Websocket

What events are available?

Cucumber 2 introduced a new model for executing a set of features. Each scenario is now compiled into a suite of Test Cases, each made up of Test Steps. Test Steps include Before and After hooks. Cucumber fires the following 5 events based on that model.

What can I use it for?

The Events API is there for getting information out of Cucumber. It’s going to be the best way to write new formatters in future — the old formatter API will be removed in Cucumber 3.0. If you’re looking for a way to contribute to Cucumber then rewriting some of the old formatters to use the new events API would be a tremendous help.

Any questions please come and join us on our gitter channel or the mailing list. All the code for this blog post is available here.

This article was originally published here.

On Epiphany and Apophany by Liz

Published Wed 9 Sep 2015 11:14

We probe, then sense, then respond.

If you’re familiar with Cynefin, you know that we categorize the obvious, analyze the complicated, probe the complex and act in chaos.

You might also know that those approaches to the different domains come with a direction to sense and respond, as well. In the ordered domains – the obvious and complicated, in which cause and effect are correlated – we sense first, then we categorize or analyze, and then we respond.

In the complex and chaotic domains, we either probe or act first, then sense, then respond.

Most people find action in chaos to be intuitive. It’s a transient domain, after all; it resolves itself quickly, and it might not resolve itself in your favour… and is even less likely to do so if you don’t act (the shallow dive into chaos notwithstanding). We don’t sit around asking, “Hm, I wonder what’s causing this fire?” We focus on putting the fire out first, and that makes sense.

But why do we do this in the complex domain? Why isn’t it useful to make sense of what we’re seeing first, before we design our experiments?

As with many questions involving human cognition, the answer is: cognitive bias.

We see patterns which don’t exist.

The term “epiphany” can be loosely defined as that moment when you say, “Oh! I get it!” because you’ve got a sudden sense of understanding something.

The term “apophany” was originally coined as a German word for the same phenomenon in schizophrenic experiences; that moment when a sufferer says, “Oh! I get it!” when they really don’t. But it’s not just schizophrenics who suffer from this. We all have this tendency to some degree. Pareidolia, the tendency to see faces in objects, is probably the best-known type of apophenia, but we see patterns everywhere.

It’s an important part of our survival. If we learn that the berry from that tree with those type of leaves isn’t good for us, or to be careful of that rock because there are often snakes sunning themselves there, or to watch out for the slippery moss, or that the deer come down here to drink and you can catch them more easily, then you have a greater chance of survival. We’re always, always looking out for patterns. In fact, when we find them, it’s so enjoyable that this pattern-learning, and application of patterns in new contexts, forms the heart of video games and is one reason why they’re horribly addictive.

In fact, our brains reward us for almost seeing the pattern, which encourages us to keep trying… and that’s why gambling is also addictive, because a lot of the time, we almost win.

In the complex domain, cause and effect can only be understood in retrospect.

This is pretty much the definition of a complex domain; one in which we can’t understand cause and effect until after we’ve caused the effect. Additionally, if you do the same thing again and again in a complex domain, it will not always have the same effect each time, so we can’t be sure of which cause might give us the effect. Even the act of trying to make sense of the domain can itself have unexpected consequences!

The problem is, we keep thinking we understand the problem. We can see the root causes. “Oh! I get it!”… and off we blithely go to “fix” our systems.

Then we’re surprised when, for instance, complexity reasserts itself and making our entire organization adopt Scrum doesn’t actually enable us to deliver software like we thought it would (though it might cause chaos, which can give us other opportunities… if we survive it).

This is the danger of sensing the problem in the complex domain; our tendency to assume we can see the causes that we need to shift to get the desired effects. And we really can’t.

The best probes are hypothesis-free.

Or rather, the hypothesis is always, “I think this might have a good impact.” Having a reasonable reason for thinking this is called coherence. It’s really hard, though, to avoid tacking on, “…because this will be the outcome.” In the complex domain, you don’t know what the outcome is going to be. It might not be a good outcome. That’s why we spend so much time making sure our probes are safe-to-fail.

I’ve written a fair bit on how to use scenarios to help generate robust experiments, but stories – human tales of what’s happening or has happened – are also a good way to find places that probes might be useful.

Particularly, if you can’t avoid having a hypothesis around outcomes (and you really can’t), one trick you can try is to have multiple outcomes. These can be conflicting, to help you check that you’re not hung up on any one outcome, or even failure outcomes that you can use to make sure your probe really is safe-to-fail.

Having multiple hypotheses means we’re more likely to find other things that we might need to measure, or other things that we need to make safe.

I really love Sensemaker.

Cognitive Edge, founded by Dave Snowden of Cynefin fame, has a really lovely bit of software called Sensemaker that collects narrative fragments – small stories – and allows the people who write those stories to say something about their desirability using Triads and Dyads and Stones.

Because we don’t know whether a story is desirable or not, the Triads and Dyads that Sensemaker uses are designed to allow for ambiguity. They usually consist of either two or three things that are all good, all bad or all neutral.

For instance, if I want to collect stories about pair programming, I might use a Dyad which has “I want to pair-program on absolutely everything!” at one end, and “I don’t want to pair-program on anything, ever,” at the other. Both of those are so extreme that it’s unlikely anyone wants to be right at either end, but they might be close. Or somewhere in the middle.

In CultureScan, Cognitive Edge use the triad, “Attitudes were about: Control, Vulnerability, or Indifference.” You can see more examples of triads, together with how they work, in the demo.

If lots and lots of people add stories, then we start seeing clusters of patterns, and we can start to think of places where experiments might be possible.

A fitness landscape from Cognitive Edge shows loose and tightly-bound clusters, together with possible directions for movement.


In the fitness landscapes revealed by the stories, tightly-bound clusters indicate that the whole system is pretty rigidly set up to provide the stories being seen. We can only move them if there’s something to move them to; for instance, an adjacent cluster. Shifting these will require big changes to the system, which means a higher appetite for risk and failure, for which you need a real sense of urgency.

If you start seeing saddle-points, however, or looser clusters… well, that means there’s support there for something different, and we can make smaller changes that begin to shift the stories.

By looking to see what kind of things the stories there talk about, we can think of experiments we might like to perform. The stories, though, have to be given to the people who are actually going to run the experiments. Interpreting them or suggesting experiments is heading into analysis territory, which won’t help! Let the people on the ground try things out, and teach them how to design great experiments.

A good probe can be amplified or dampened, watched for success or failure, and is coherent.

Cognitive Edge have a practice called Ritual Dissent, that’s a bit like the “Fly on the Wall” pattern, but is done in a pretty negative way, in that the group to whom the experiment is being presented critiques it against the criteria above. I’ve found that testers, with their critical, “What about this scenario?” mindsets, can really help to make sure that probes really are good probes. Make sure the person presenting can take the criticism!

There’s a tendency in human beings, though, to analyze their way out of failure; to think of failure scenarios, then stop those happening. Failure feels bad. It tells us that our patterns were wrong! That we were suffering from apophany, not epiphany.

But we don’t need to be afraid of apophany. Instead of avoiding failure, we can make our probes safe-to-fail; perhaps by doing them at a scale where failure is survivable, or with safety nets that turn commitments into options instead (like having roll-back capability when releasing, for instance), or – my favourite – simply avoiding the trap of signalling intent when we didn’t mean to, and instead, communicating to people who might care that it’s an experiment we want to try.

And that it might just make a difference.

This article was originally published here.

POODNYC 2015 Scholarships have been awarded by Sandi Metz

Published Wed 26 Aug 2015 12:00

Scholarships for the Oct 19-21 Practical Object-Oriented Design Course (POODNYC) in New York City have been awarded! Winners are listed below, but before I introduce them I'd like to give an overview of the applicant pool and selection process.

I'll be awarding scholarships for future public classes and hope that transparency about how this works will motivate you into talking some deserving person into applying, or into applying for your own deserving self.

The POODNYC Scholarship

The scholarship includes a seat in POODNYC, and airfare to and lodging in NYC (all courtesy of Hashrocket, to whom I am very grateful). As you can see, it's a full ride. The intent was to remove every financial barrier that would prevent the recipient from attending.


There were 24 applicants.

Experience Level:

  • 16 - less than 1 year of experience or currently in school / attending bootcamp
  •  8 - 1+ years
  •  8 - career changers


Gender:

  • 19 - women
  •  5 - men

Minorities/Under Represented:

  • 13 - people of color (4 men, 9 women)
  •  3 - women over 35


Location:

  • 11 - New York
  •  7 - Other states
  •  6 - International (Ecuador, England and Germany)

Selection Criteria

We (me and 2 others) knew that we wanted these scholarships to support good works and/or diversity. The first time we gave scholarships (for the Oct 2014 POODNC course) we supplied very minimal instructions and relied on each person to argue their best case. These instructions merely said 'Tell us why you deserve a scholarship'. This year we added an additional field for ‘Amount of Programming Experience’; more on this below.

While we don't have a rigid checklist by which to evaluate applications, we definitely look favorably upon candidates who:

  1. are engaged in good works
  2. have a moderate amount of programming experience
  3. are demographically diverse from the community at large
  4. have a clear financial need

Good Works:

We preferred candidates with a demonstrated track record of good works, where we define 'good work' as anything from "I spend my spare time on 'Code for America' projects" to "I volunteer as a coach at RailsBridge, RailsGirls, BlackGirlsCode". We preferred candidates who could say "This scholarship will help me do a better job at this thing I am already doing" over candidates who said "This scholarship will make me better".

This year there were so many applicants giving back to the community that we gave additional weight to this criterion.


Experience:

We required some amount of real-world programming experience, 'some' being defined as 'more than a bootcamp'. Folks with very little programming experience have successfully taken this course, but more experienced programmers get correspondingly more out of it. The scholarships are intended as levers to support change; requiring at least 6 months (ish) of real-world programming experience moves the fulcrum and makes each scholarship have more value.

There were a number of applicants who, although they were engaged in all kinds of good works, didn't yet have enough experience to qualify for a scholarship. We regretfully removed them from consideration and urge them to reapply in the future.


Diversity:

We believe that human diversity improves both the software we create and the community in which we work. We were biased towards candidates who differed from the demographic norm (i.e., in age, ethnicity, gender, etc).

Financial need:

While last year we required folks to demonstrate a clear financial need, this year a few candidates were engaged in such impressive good works that we were tempted to ignore this criterion. As a result of this experience, we are officially softening our stance on financial need. Although we will continue to take ability to pay into consideration, future scholarship applicants will not be disqualified based on ability to pay.

This year we did not disqualify a single candidate based on our assessment of their ability to pay.

As I said above, we didn't explicitly ask for financial, experience, demographic or good works information but it was easy to get. Many applicants actually included it on their submission and simple web searches unearthed the missing bits.


A 35-something woman of color who was in the midst of career transition while organizing a community meetup and hosting hack-a-thons would rank very high by the criteria above, and a 20-something Caucasian male who was employed as a junior developer, well, not so much.

The applicants for POODNYC were doing so much good for the community that applications had to pass both the ‘Good Works’ and ‘Experience Level’ tests to stay in the running. Applications which survived those tests went on to be evaluated based on ‘Diversity’ and ‘Financial Need’. We narrowed the list from 24 to five finalists (interestingly, like last year, the demographics of the five matched those of the whole), and then selected the two winners.

The Winners Are

Charlotte Chang

Charlotte is a career changer and a recent programming convert. Upon graduating from Flatiron School, she tried freelancing only to discover that "it takes a village to raise a junior developer". She lives in Cleveland, a former industrial powerhouse which has lost 50% of its manufacturing jobs since 1954, and which had a median household income of $26,096 in 2013. Living in a community which needs to shift its focus to building technology skills led Charlotte to give back.

As part of her efforts to refresh the rustbelt, Charlotte is an organizer of Cleveland Agile Group (CleAg) and is very involved in Make on the Lake, an Internet of Things meetup. She has volunteered for Cleveland Give Camp and Canalway Partners*. Charlotte will also be speaking to future technologists at We Can Code It.

Charlotte doesn't want to be just a 'developer', she wants to be a great developer, one with strong coding practices who gives back to the community and becomes a mentor for others. Charlotte currently works at LeanDog.

* No, she did not receive a scholarship because she volunteers for the bike path. This is pure coincidence.

Richard Lau

Richard is a combat veteran who is passionate about helping other veterans transition their career after their service is complete. He is currently building a free program to help veterans, along with their spouses and children, learn programming. Richard is also an advocate for veteran entrepreneurship.

He attended the Web Development Immersive at General Assembly on a Veteran Scholarship. After graduating from the program he started volunteering for RailsBridge NYC. Richard plans to use the course to improve his skills so he can be a better and more knowledgeable teacher. He also serves as a co-organizer of the New York VIM meetup.

Richard lives in New York City.

We Want You

As you can see, a candidate who is engaged in good works and is an outlier in every demographic category would be unbeatable, but qualifying on even a subset of these criteria can win you a scholarship. If you, or someone you know, fills the bill, I hope to see your application in the future.

A number of applicants were actively doing good works but did not yet have enough experience to get the most out of the course. If you're one of these folks, I urge you to re-apply in the future. One of today's winners is just such a second-time applicant, and proof that additional experience combined with persistence can pay off.

So there you have it ... the POODNYC scholarship winners. Please join me in extending my congratulations to Richard and Charlotte. Our community is improved by their presence. I'm grateful that they're here and gratified to support them along their way.

In closing, one more shout-out to Hashrocket. Their continuing support of POOD course scholarships is a sign of their ongoing commitment to our community and reflects their core values. My thanks.

Sign up for my newsletter, which contains random thoughts that don't quite make it into blog posts.

This article was originally published here.

Negative Scenarios in BDD by Liz

Published Fri 19 Jun 2015 14:04

One problem I hear repeatedly from people is that they can’t find a good place to start talking about scenarios.

An easy trick is to find the person who fought to get the budget for the project (the primary stakeholder) and ask them why they wanted the project. Often they’ll tell you some story about a particular group of people that they want to support, or some new context that’s coming along that the software needs to work in. All you need to do to get your first scenario in that situation is ask, “Can you give me an example?”

When the project or capability is about non-functionals such as security or performance, though, this can be a bit trickier.

I can remember when we were talking to the Guardian editors about performance on the R2 project. “When (other national newspaper) went live with their new front page,” one editor explained, “their site went down for 3 days under the weight of people coming to look at the page. We don’t want that to happen.”

Or, as another organization said, “We went live, and it crashed. It took three months to get the site up and running again. The code was so awful we couldn’t fix it.”

These kinds of negative stories are often drivers, particularly when there are non-functionals involved. You can always handle the requirements through monitoring instead of testing, but the conversation can’t follow the usual, “Can you give me an example?” pattern, because all the examples are things that people don’t want.

Instead, keep that negativity, and ask questions like, “What performance would we need to have to avoid that happening to us? Do we have a good security strategy for avoiding the hacking attempt that ended up with (major corporation)’s passwords getting stolen? How do we make sure we don’t crash when we go live?” Keep the focus on the negatives, because that’s what we want to avoid.

When you come to write the scenarios down, whether it’s in terms of monitoring or a test, it’s often worth keeping that negative around. You can create positive scenarios to look at the monitoring boundaries, but the negative reminds people why they’re doing this.

Given we’ve gone live with the front page
When Tony Blair resigns on the same day
Then the site shouldn’t go down under the weight of people reading that news.

Remember that if you’re the first people to solve the problem then you’ll need to try something out, but if it’s just an industry standard practice, make sure you’ve got someone on the team who’s implemented it before.

Part of the power of BDD’s scenarios is that they provide examples as to why the behaviour is valuable. You’ll need to convert this to positive behaviour to implement it, but if avoiding the negatives is valuable, include those too, even if it’s just text on a wiki or blurb at the top of a feature file… and don’t be afraid to start there. Negative scenarios are hugely powerful, especially since they often have the most interesting stories attached to them.
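As a sketch of what that might look like in practice, here’s a hypothetical feature file (the feature name, load figures and thresholds are invented for illustration) that keeps the negative story as blurb at the top, with a positive scenario covering the monitoring boundary:

```gherkin
Feature: Front page performance
  # Why this matters: when another national newspaper relaunched their
  # front page, their site went down for 3 days under the weight of
  # readers coming to look at it. We don't want that to happen to us.

  Scenario: Front page stays responsive under a big-news-day load
    Given the front page is live
    When 10,000 concurrent readers request the page
    Then the 95th-percentile response time should stay under 2 seconds
    And no request should fail
```

The comment block carries the negative story that drove the requirement, so anyone reading the positive scenarios later still knows why the thresholds exist.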

This article was originally published here.