Blog

The very latest thinking from us on BDD, Continuous Delivery, and more.

How do you terminate a project in your org? by Liz

Published Mon 18 Jul 2016 08:46

We all know that when we do something new for the first time, we make discoveries; and all software projects (and in fact change efforts of any variety) target something new.

(You can find out what that is by asking, “What will we be able to do when this is done that we can’t do right now? What will our customers, our staff or our systems be able to do?” This is the differentiating capability. There may be more than one, especially if the organization is used to delivering large buckets of work.)

Often, though, the discoveries that are made will slow things down, or make them impossible. Too many contexts to consider. Third parties that don’t want to co-operate, let alone collaborate. A scarcity of skills. Whatever we discover, sometimes it becomes apparent that the effort is never going to pay for itself.

Of course, if you’ve invested a lot of money or time into the effort, it’s tempting to keep throwing more after it: the sunk-cost fallacy. So here are three questions that orgs which are resilient and resistant to that temptation are able to answer:

  1. How can you tell if it’s failing?
  2. What’s the process for terminating or redirecting failing efforts?
  3. What happens to people who do that?

If you can’t answer those questions proudly for your org, or your project, you’re probably over-investing… which, on a regular basis, means throwing good money after bad, and wasting the time and effort of good people.

Wouldn’t you like to spend that on something else instead?

This article was originally published here.

Make Everything The Same by Sandi Metz

Published Thu 9 Jun 2016 12:23

This post originally appeared in my Chainline Newsletter. By popular request, I'm re-publishing it here on my blog. It has been lightly edited.

As part of my local ruby meetup (#westendruby), I've been dabbling in katas and quizzes. Having worked several, I can't help but notice that my solutions are sometimes radically different from the others.

Hmmm.

Having one's solutions differ from the crowd's is (and ought to be) a cause for heightened scrutiny. Therefore, I've been pondering code, trying to understand what's driving my choices. I have a glimmer of an idea, and thus, this newsletter.

The Setup

The easiest way for me to explain is for you to first go do the Roman numerals kata. Happily, there's a Roman numerals test on exercism to get you started. The task is to convert Arabic numbers into Roman numerals, and the tests are all some form of:

assert_equal 'I', 1.to_roman

or

assert_equal 'CMXI', 911.to_roman

In case your life is such that you can't drop everything and do that exercise right now, here's a reminder of how the Roman counting system works. There are two main ideas. First, a few specific numbers are represented by letters of the Roman alphabet. Next, these letters can be combined to represent other numbers.

Seven Roman letters are used. The letters and their associated values are:

  • I = 1
  • V = 5
  • X = 10
  • L = 50
  • C = 100
  • D = 500
  • M = 1,000

The numbers 1-10 are written in Roman numerals as I, II, III, IV, V, VI, VII, VIII, IX, and X. As you may already know, there are two rules at work.

1, 2 and 3 illustrate the first rule. 1 is one I. 2 is two Is. 3 is three Is. The rule is: for Arabic value x, select the largest Roman letter whose value doesn't exceed x, and repeat it as many times as needed. Let's call this the 'additive' rule.

4 follows the second rule. Instead of IIII (four I's), 4 is written as IV (one less than five). This rule is: select the first Roman letter higher than the Arabic value, and prefix it with the letter that makes up the difference. This rule is used in cases where 4 sequential occurrences of any letter would otherwise be necessary. Thus, 4 is IV instead of IIII, 9 is IX instead of VIIII, etc. Let's call this the 'subtractive' rule.

Now, consider the code needed to satisfy this kata. Given that there are two rules, it seems as if there must be two cases. To handle the two cases, it feels like the code will need a conditional that has two branches, one for each rule.

The actual implementation code might be more procedural (the conversion logic could be hard-coded into the branches of the conditional) or more object-oriented (you could create an object to handle each rule and have a conditional somewhere to select the correct one), but regardless of whether you write a procedure or use OO composition, there's still a conditional.

The Insight

I hated this. Not only did I not want the conditional, but figuring out when to use which rule seemed like a royal PITA. It felt like the conditional would need to do something clever to select the correct rule, and I wasn't feeling particularly quick-witted. Thus, I found myself pondering this kata with a faint sense of dread, while the meetup loomed.

Regardless, I sat down to write some code, and immediately realized that although I was faintly aware that there were two conversion rules, I didn't know the full set of Roman letters and their associated Arabic values. I then consulted the Wikipedia page for Roman numerals, where I found something which gave me a dramatically simpler view of the problem. Serious lightbulb moment.

It turns out that the way we think about Roman numerals today is the result of an evolutionary process. In the beginning, they were uniformly additive. 4 was written as IIII and 9, VIIII. As time passed, the subtractive form crept in. 4 became IV, and 9, IX. In modern times we consistently use the shorter, subtractive form, but in Roman times it was common to see one form, or the other, or a combination thereof.

This means that the additive form is a completely legitimate kind of Roman numeral. (Who knew?) It can be produced in its entirety by rule 1, which is comfortingly simple to implement. The conversion from additive to subtractive is also dead easy, and can be accomplished via a simple mapping that encodes rule 2.

The key insight here is that converting from Arabic to additive Roman is one idea, and converting from additive to subtractive Roman is quite another. Solving this kata by converting Arabic numbers directly into subtractive Roman skips a step, and conflates these two ideas. It is this conflation that dooms us to the conditional.

Having had this realization, I wrote two simple bits of code. One converted Arabic to additive Roman, the other additive to subtractive Roman. Used in combination, they pass the tests.
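
For illustration only (this isn't the code from the post, and the kata's tests would monkey-patch Integer with a to_roman method rather than call free-standing methods like these), a minimal Ruby sketch of the two transformations might look like this:

ROMAN_LETTERS = { 1000 => 'M', 500 => 'D', 100 => 'C',
                  50 => 'L', 10 => 'X', 5 => 'V', 1 => 'I' }

# Rule 1: build the additive form by repeatedly taking the largest
# letter whose value still fits into what's left of the number.
def to_additive_roman(number)
  ROMAN_LETTERS.reduce('') do |roman, (value, letter)|
    copies, number = number.divmod(value)
    roman + letter * copies
  end
end

# Rule 2: rewrite each four-in-a-row run as its subtractive pair.
# Longer runs like 'DCCCC' must be replaced before their substrings.
SUBTRACTIVE_FORMS = { 'DCCCC' => 'CM', 'CCCC' => 'CD',
                      'LXXXX' => 'XC', 'XXXX' => 'XL',
                      'VIIII' => 'IX', 'IIII' => 'IV' }

def to_subtractive_roman(additive)
  SUBTRACTIVE_FORMS.reduce(additive) do |roman, (run, pair)|
    roman.gsub(run, pair)
  end
end

to_additive_roman(911)                       # => "DCCCCXI"
to_subtractive_roman(to_additive_roman(911)) # => "CMXI"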

I took the code to #westendruby, where someone pointed out that not only was my variant more understandable than many other implementations, but also that it could easily be extended to perform the reverse conversion. They were absolutely right; it took just a few lines of additional code to convert from Roman numerals back into Arabic numbers. Adding this new feature to other implementations was far more difficult.

I wrote several versions of the kata. Here's the one I ended up liking the best.

The Upshot

I left that meetup with a newfound respect for what it means to have a conditional.

Conditionals are trying to tell you something. Sometimes it is that you ought to be using composition, i.e., that you should create multiple objects that play a common role, and then select and inject one of these objects for use in place of the conditional. Composition is the right solution when a single abstract concept has several concrete implementations.
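
For instance (a generic Ruby illustration of the pattern, not code from the kata), the caller below has no conditional; it simply uses whichever role-playing object it was given:

# Two objects playing the same role: each responds to #greet.
class FormalGreeter
  def greet(name)
    "Good day, #{name}."
  end
end

class CasualGreeter
  def greet(name)
    "Hey, #{name}!"
  end
end

# The conditional (if any) lives wherever the greeter is chosen
# and injected, not in the code that uses it.
def welcome(greeter, name)
  greeter.greet(name)
end

welcome(FormalGreeter.new, 'Ada')  # => "Good day, Ada."
welcome(CasualGreeter.new, 'Ada')  # => "Hey, Ada!"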

However, rule 1 and rule 2 above don't represent alternative implementations of the same concept; instead, they represent two entirely unrelated ideas. The solution here is therefore not composition, but instead to create two transformations, and apply them in order. This lets you replace one "special" case with two normal ones, and reap the following benefits:

  • The resulting code is more straightforward.
  • The tests are more understandable.
  • The code can produce the pure additive form of Roman numerals, in addition to the subtractive one.
  • The code is easily extended to do the reverse conversion.

The keystone in this arch of understanding is being comfortable with transformations that appear to do nothing. It is entirely possible for a Roman numeral to look identical in its additive and subtractive forms. III for example, looks the same either way. Regardless, the additive III must be permitted to pass unhindered through the transformation to subtractive. You can't check to see if it needs to be converted, instead you must blithely convert it. This makes everything the same, and it is sameness that gets rid of the conditional.
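
Using the hypothetical methods from the sketch above, that blithe conversion looks like this:

to_subtractive_roman('VIIII') # => "IX"
to_subtractive_roman('III')   # => "III" (nothing matches; it passes through unchanged)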

The Commentary

Now, if you'll permit, I'll speculate. I'm interested in why this solution occurred to me, but not others. Folks at the meetup found it startling in its simplicity and utility. Once known, it seems inevitable, but before knowing, inconceivable.

What would someone have to know in order to be able to dream up this solution? How can we teach OO so that folks learn to look at similar problems and recognize the underlying concepts? What quality in my background or style of thinking revealed them to me? Mind you, I'm not saying that my solution is perfect, but it's certainly different. Why?

I think there are two reasons. First, I'm committed to simplicity. I believe in it, and insist upon it. I am unwilling to settle for complexity. Simplicity is often harder than complexity, but it's worth the struggle, and everything in my experience tells me that with enough effort, it's achievable. I have faith.

Next, the desire for simplicity means that I abhor special cases. I am willing to trade CPU cycles to achieve sameness. I'll happily perform unnecessary operations on objects that are already perfectly okay if that lets me treat them interchangeably. Code is read many more times than it is written, and computers are fast. This trade is a bargain that I'll take every time.

So:
Insist on simplicity.
Resist special cases.
Listen to conditionals.
Identify underlying concepts.
And search for the abstractions that let you treat everything the same.

Thanks for reading,

Sandi

News:

Public POOD course coming to New York City!

I'm pleased to announce that I'll be teaching a public Practical Object-Oriented Design course in New York City on Oct 31-Nov 2, 2016, fondly named POODNYC. And I'm also delighted that Avdi Grimm, Head Chef at Ruby Tapas, will again be the co-instructor.

Tickets are on sale now!

99Bottles Book

My new 99 Bottles book is out in private beta, and is slated for general release Any Day Now ™. News and discounts will be announced on the 99Bottles mailing list. This list has almost zero traffic, so you won't hate yourself if you sign up today.

This article was originally published here.

On Learning and Information by Liz

Published Tue 31 May 2016 15:32

This has been an interesting year for me. At the end of March I came out of one of the largest Agile transformations ever attempted (still going, surprisingly well), and learned way more than I ever thought possible about how adoption works at scale (or doesn’t… making it safe-to-fail turns out to be important).

The learning keeps going. I’ve just done Sharon L. Bowman’s amazing “Training from the Back of the Room” course, and following the Enterprise Services Planning Executive Summit, I’ve signed up for the five-day course for that, too.

That last one’s exciting for me. I’ve been doing Agile for long enough now that I’m finding it hard to spot new learning opportunities within the Agile space. Sure, there’s still plenty for me to learn about psychology, we’re still getting that BDD message out and learning more all the time, and there are occasional gems like Paul Goddard’s “Improving Agile Teams” that go to places I hadn’t thought of.

It’s been a fair few years since I experienced something of a paradigm shift in thinking, though. The ESP Summit gave that to me and more.

Starting from Where You Are Now

Getting 50+ managers of MD level and up in a room together, with relatively few coaches, changes the dynamic of the conversations. It becomes far less about how our particular toolboxes can help, and more about what problems we haven’t solved yet.

Of course, they’re all human problems. The thing is that it isn’t necessarily the current culture that’s the problem; it’s often self-supporting structures and systems that have been in place for a long time. Removing one can often lead to a lack of support for another, which cascades. Someone once referred to an Agile transformation at a client as “the worst implementation of Agile I’ve ever seen”, and they were right; except it wasn’t an implementation, but an adoption. Of course it’s hard to do Agile when you can’t get a server, you’ve got regulatory requirements to consider, you’ve got five main stakeholders for every project, nobody understands the new roles they’ve been asked to play and you’re still running a yearly budgeting cycle – just some of the common problems that I’ve come across in a number of large clients.

Unless you’ve got a sense of urgency so powerful that you’re willing to risk throwing the baby out with the bathwater, incremental change is the way to go, but where do you start, and what do you change first?

The thing I like most about Kanban, and about ESP, is that “start from where you are now” mentality. Sure, it would be fantastic if we could start creating cross-functional teams immediately. But even if we do that, in a large organization it still takes weeks or months to put together any group that can execute on the proposed ideas and get them live, and it’s hard to see the benefits without doing that.

There’s been a bit of a shift in the Agile space away from the notion that cross-functional teams are necessarily where we start, which means we’re shifting away from some of the core concepts of Agile itself.

Dan North and Chris Matts, my long-time friends and mentors, have been busy creating a thing called Business Mapping, in which they help organizations match their investments and budgets to the capacity they actually have to deliver, while slowly growing “staff liquidity” that allows for more flexible delivery.

Enterprise Services Planning achieves much the same result, with a focus on disciplined, data-driven change that I found challenging but exciting: firstly because I realise I haven’t done enough data collection in the past, and secondly because it directs leaders to trust maths, rather than instincts. This is still Kanban, but on steroids: not just people working together in a team, but teams working together; not just leadership at every level, but people using the information at their disposal to drive change and experiment.

The Advent of Adhocracy

Professor Julian Birkinshaw’s keynote was the biggest paradigm shift I’ve experienced since Dave Snowden introduced me to Cynefin, and those of you who know how much I love that little framework understand that I’m not using the phrase lightly.

Julian talks about three different ages:

The Industrial Age: Capital and labour are scarce resources. Creates a bureaucracy in which position is privileged, co-ordination achieved by rules, decisions made through hierarchy, and people motivated by extrinsic rewards.

The Information Age: Capital and labour are no longer scarce, but knowledge and information are. Creates a meritocracy in which knowledge is privileged, co-ordination achieved by mutual adjustment, decisions made through logical argument and people motivated by personal mastery.

The Post-Information Age: Knowledge and information are no longer scarce, but action and conviction are. Creates an adhocracy in which action is privileged, co-ordination is achieved around opportunity, decisions are made through experimentation and people are motivated by achievement.

As Julian talked about this, I found myself thinking about the difference between the start-ups I’ve worked with and the large, global organizations.

I wondered – could making the right kind of information more freely available, and helping people within those organizations achieve personal mastery, give an organization the ability to move into that “adhocracy”? There are still plenty of places which worry about cost per head, when the value is actually in the relationships between people – the value stream – and not the people as individuals. If we had better measurements of that value, would it help us improve those relationships? Would we, as coaches and consultants, develop more of an adhocracy ourselves, and be able to seize opportunities for change as and when they become available?

I keep hearing people within those large organizations make comments about “start-up mindset” and the ability to react to the market. But without Dan and Chris’s “staff liquidity”, knowledge still becomes the constraint; and without quick information about what’s working and what isn’t, small adjustments based on long-term plans, rather than routine experimentation around opportunity, become the norm.

So I’m going off to get myself more tools, so that I can help organizations to get that information, make sense of it, and create that flexibility; not just in their products and services, but in their changes and adoptions and transformations too.

And I’ll be thinking about this new pattern all the time. It feels like it fits into a bunch of other stuff, but I don’t know how yet.

Julian Birkinshaw says he has a book out next year. I can’t wait.


This article was originally published here.

Correlated in Retrospect by Liz

Published Mon 9 May 2016 21:12

A few years back, I went to visit a company that had managed to achieve a high level of agility without high levels of coaching or training, shipping several times a day. I was curious as to how they had done it. It turned out to be a product of a highly experimental culture, and we spent a whole day swapping my BDD knowledge for their stories of how they managed to reach the place they were in.

While I was there, I saw a very interesting graph that looked a bit like this:

[Graph: bug count over time, with dips followed by a rise]

“That’s interesting,” I said. “Is that your bug count over time? What happened?”

“Well,” one of them said, “we realised our bug count was growing, so we hired a new developer. We thought we’d rotate our existing team through a bug-fixing role, and we hypothesized that it would bring the bug count down. And it worked, for a while – that’s the first dip. It worked so well, we thought we’d hire another developer, so that we could rotate another team member, and we thought that would get rid of the bugs… but they started going up again.”

“Ah,” I said wisely. “The developer was no good?” (Human beings like to spot patterns and think they understand root causes – and I’m human too.)

“Nope.” They were all smiling, waiting for me to guess.

“Two new people was just too many? They got complacent because someone was fixing the bugs? The existing team was fed up of the bug-fixing role?” I ran through all the causes I could think of.

“Nope.”

“All right. Who was writing the bugs?” I asked.

“Nobody.”

I was confused.

“The bugs were already there,” one of them explained. “The users had spotted that we were fixing them, and started reporting them. The bug count going up… that was a good thing.”

And I looked at the graph, and suddenly understood. I didn’t know Cynefin back then, and I didn’t understand complexity, but I did understand perverse incentives, and here was a positive version. In retrospect, the cause was obvious. It’s the same reason reported crime goes up when the police patrol the streets: it’s easier to report it.

Conversely, a good way to have a low bug count is to make it hard to report. I spent a good few years working in Waterfall environments, and I can remember the arguments I had about whether something in my work was genuinely a bug or not… making it much harder for anyone testing my code, which meant I looked like a good developer (I really wasn’t).

Whenever we do anything in a complex system, we get unexpected side-effects. Another example of this is the Hawthorne effect, which goes something like this:

“Do you work better in this factory if we turn the lights up?”

“Yes.”

“Do you work better if we turn the lights down?” (Just checking our null hypothesis…)

“Yes.”

“What? Um, that’s confusing… do you work better with the lights up, or down?”

“We don’t care; just stop watching us.”

We’ve all come across examples of perverse incentives, which are another kind of unintended consequence. This is what happens when you turn measurements into targets.

When you’re creating a probe, it’s important to have a way of knowing whether it’s succeeding or failing, it’s true… but the signs of success or failure may only be clear in retrospect. A lot of people who create experiments get hung up on one hypothesis, and as a result they obsess over one perceived cause, or one measurement. In the process they might miss signs that the experiment is succeeding or failing, or even misinterpret one as the other.

Rather than having a hypothesis, in complexity, we want coherence – a realistic reason for thinking that the probe might have a good impact, with the understanding that we might not necessarily get the particular outcome we’re thinking of. This is why I get people creating probes to run through multiple scenarios of success or failure, so they think about what things they might want to be watching, or how they can put appropriate signals in place, to which they can apply some common sense in retrospect.

As we’ve seen, watching is itself an intervention… so you probably want to make sure it’s safe-to-fail.


This article was originally published here.

Extreme YAGNI: How BDD nails your prototyping stage by Chris

Published Wed 4 May 2016 08:51

Sometimes people don’t see the value in the BDD process. They contend that the BDD ceremonies are a waste of time, and get in the way of delivering real features to customers. Others cannot see how to apply BDD to their project, as no-one knows exactly what the project will look like yet. As they’re only in the prototyping stage, by the time a feature file is written and made executable, it’s already out of date.

I don’t agree with this. If our process is set up right, we can prototype just as effectively and retain the collaboration benefits that BDD gives us.

You Ain’t Gonna Need It

One of the biggest wins that Test-driven Development (TDD) gives us is the principle of YAGNI - “You Ain’t Gonna Need It”. It’s very tempting when writing code to go off on a tangent and produce a beautiful structured work of art that has zero practical use. TDD stops us doing this by forcing us only to write code that a test requires. Even some experts who don’t practice or encourage TDD often espouse the power of writing the calling code first in order to achieve much the same effect.

BDD gives us the same YAGNI win, but at a level higher than TDD. With the BDD cycle we’re adding thin slices of customer-observable behaviour to our systems. If we only write the code that’s directly used by the business, then in theory we should be cutting down on wasteful development time.

However, there’s a snag here. If we’re prototyping, we don’t know whether this feature will make it into the final product. We still need to give feedback to our product team, so we need to build something. If the feature is complex, it might take a while to build, and it might never get used. Why bother going through the process of specifying the feature using BDD and Cucumber features?

Happily, we can take YAGNI a level further to help us out.

Extreme YAGNI

Often in TDD, and especially when teaching it, I will encourage people to take shortcuts that might seem silly in their production code. For example, when writing a simple supermarket checkout class in Javascript, we might start with a test like this:

    var checkout = new Checkout();
    expect(checkout.total()).toEqual(0);

Our test defines our supermarket checkout to have a total of zero on creation. One simple way to make this work would be to define the following class:

    var Checkout = function() {
      this.total = function() {
        return 0;
      };
    }

You might think that’s cheating, and many people define a member variable for total, set it to 0 in the constructor, and miss this step out entirely. There is, however, an important principle at stake here. The code we have does exactly what the test requires it to. We may not need a local variable to store the total at all.

Here’s the secret: we can practice this “extreme YAGNI” at the level of our features, too. If there’s a quick way to make our feature files work, then there’s nothing to stop us taking as many shortcuts as we can to get things working quickly.

For example, if we’re testing the user interface of our system via Cucumber features, one fast way to ensure things are working is to hard code the values in the user interface and not implement the back end behaviour too early. Why not make key pages static in your application, or hard code a few cases so your business gets the rough idea?
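
As a sketch of what that can look like (a hypothetical Sinatra route, not code from the original post), the “key page” can be entirely canned while the feature that exercises it passes:

require 'sinatra'

# Hard-coded prototype: the Cucumber feature can drive this page and
# assert on the total, while no basket or back end exists yet.
get '/checkout' do
  "<h1>Checkout</h1><p id='total'>0</p>"
end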

Again, you might think that’s cheating, but your features pass, so you’ve delivered what’s been asked for. You’ve spent the time thrashing out the story in a 3 amigos meeting, so you gain the benefits of deliberately discovering your software. You’re giving your colleagues real insight to guide the next set of stories, rather than vague guessing up front. Your UX and design colleagues now have important feedback through a working deployed system very quickly, and quick feedback through working software is a core component of the Agile Manifesto.

By putting off implementing the whole feature until later, we can use BDD to help us navigate the “chaotic” Cynefin space rather than just the “complicated” space. This in theory makes BDD twice as useful to our business.

Fast, fluid BDD

This all assumes that we have a fast, fluid BDD process, with close collaboration built in. If it takes a week to coordinate everyone for the next feature file, then the temptation is to have a long meeting and go through too many features, without a chance to pause, prototype, deliver and learn from working software. Maybe it’s time to re-organise those desks and sit all the members of your team together, or clean up your remote working practices, or block out time each day for 3 amigos sessions. You’ll be surprised how much small changes speed your team up.

This article was originally published here.

The Wrong Abstraction by Sandi Metz

Published Wed 20 Jan 2016 20:30

I originally wrote the following for my Chainline Newsletter, but I continue to get tweets about this idea, so I'm re-publishing the article here on my blog. This version has been lightly edited.


I've been thinking about the consequences of the "wrong abstraction." My RailsConf 2014 "all the little things" talk included a section where I asserted:

duplication is far cheaper than the wrong abstraction

And in the summary, I went on to advise:

prefer duplication over the wrong abstraction

This small section of a much bigger talk invoked a surprisingly strong reaction. A few folks suggested that I had lost my mind, but many more expressed heartfelt agreement.

The strength of the reaction made me realize just how widespread and intractable the "wrong abstraction" problem is. I started asking questions and came to see the following pattern:

  1. Programmer A sees duplication.

  2. Programmer A extracts duplication and gives it a name.

    This creates a new abstraction. It could be a new method, or perhaps even a new class.

  3. Programmer A replaces the duplication with the new abstraction.

    Ah, the code is perfect. Programmer A trots happily away.

  4. Time passes.

  5. A new requirement appears for which the current abstraction is almost perfect.

  6. Programmer B gets tasked to implement this requirement.

    Programmer B feels honor-bound to retain the existing abstraction, but since it isn't exactly the same for every case, they alter the code to take a parameter, and then add logic to conditionally do the right thing based on the value of that parameter.

    What was once a universal abstraction now behaves differently for different cases.

  7. Another new requirement arrives.
    Programmer X.
    Another additional parameter.
    Another new conditional.
    Loop until code becomes incomprehensible.

  8. You appear in the story about here, and your life takes a dramatic turn for the worse.

Existing code exerts a powerful influence. Its very presence argues that it is both correct and necessary. We know that code represents effort expended, and we are very motivated to preserve the value of this effort. And, unfortunately, the sad truth is that the more complicated and incomprehensible the code, i.e. the deeper the investment in creating it, the more we feel pressure to retain it (the "sunk cost fallacy"). It's as if our unconscious tells us, "Goodness, that's so confusing, it must have taken ages to get right. Surely it's really, really important. It would be a sin to let all that effort go to waste."

When you appear in this story in step 8 above, this pressure may compel you to proceed forward, that is, to implement the new requirement by changing the existing code. Attempting to do so, however, is brutal. The code no longer represents a single, common abstraction, but has instead become a condition-laden procedure which interleaves a number of vaguely associated ideas. It is hard to understand and easy to break.

If you find yourself in this situation, resist being driven by sunk costs. When dealing with the wrong abstraction, the fastest way forward is back. Do the following:

  1. Re-introduce duplication by inlining the abstracted code back into every caller.
  2. Within each caller, use the parameters being passed to determine the subset of the inlined code that this specific caller executes.
  3. Delete the bits that aren't needed for this particular caller.

This removes both the abstraction and the conditionals, and reduces each caller to only the code it needs. When you rewind decisions in this way, it's common to find that although each caller ostensibly invoked a shared abstraction, the code they were running was fairly unique. Once you completely remove the old abstraction you can start anew, re-isolating duplication and re-extracting abstractions.
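
As a minimal Ruby sketch of that journey back (the method, the flag and the collaborators are all hypothetical, invented for illustration):

# Stand-in collaborators, just so the sketch runs.
module Courier
  def self.overnight(order); "overnight: #{order}"; end
end

module Post
  def self.standard(order); "standard: #{order}"; end
end

# Before: a shared "abstraction" that has grown a parameter and a conditional.
def ship(order, express: false)
  if express
    Courier.overnight(order)
  else
    Post.standard(order)
  end
end

# After inlining into each caller and deleting the unused branch:
def ship_rush_order(order)     # this caller always passed express: true
  Courier.overnight(order)
end

def ship_regular_order(order)  # this caller always passed express: false
  Post.standard(order)
end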

I've seen problems where folks were trying valiantly to move forward with the wrong abstraction, but having very little success. Adding new features was incredibly hard, and each success further complicated the code, which made adding the next feature even harder. When they altered their point of view from "I must preserve our investment in this code" to "This code made sense for a while, but perhaps we've learned all we can from it," and gave themselves permission to re-think their abstractions in light of current requirements, everything got easier. Once they inlined the code, the path forward became obvious, and adding new features become faster and easier.

The moral of this story? Don't get trapped by the sunk cost fallacy. If you find yourself passing parameters and adding conditional paths through shared code, the abstraction is incorrect. It may have been right to begin with, but that day has passed. Once an abstraction is proved wrong the best strategy is to re-introduce duplication and let it show you what's right. Although it occasionally makes sense to accumulate a few conditionals to gain insight into what's going on, you'll suffer less pain if you abandon the wrong abstraction sooner rather than later.

When the abstraction is wrong, the fastest way forward is back. This is not retreat, it's advance in a better direction. Do it. You'll improve your own life, and the lives of all who follow.

News:

Public POOD course coming to San Francisco!

I'm pleased to announce that I'll be teaching a public Practical Object-Oriented Design course in San Francisco on May 9-11, 2016, fondly named POODGATE. And (drum roll) I'm doubly pleased to announce that Avdi Grimm, Head Chef at Ruby Tapas, will be the co-instructor.

This might be your one opportunity to spend three fun-filled days in a room with the two of us. :-) Last fall's New York City course filled very quickly; if you're interested, don't procrastinate. Tickets are on sale now!

99Bottles Book

The beta version of my 99Bottles book will be available in February. News, and even better, discounts will be announced on the 99Bottles mailing list. This is a seriously low traffic list, so you will come to no harm if you sign up now.

This article was originally published here.

BDD: A Three-Headed Monster by Liz

Published Mon 14 Dec 2015 19:31

Back in Greek mythology, there was a dog called Cerberus. It guarded the gate to the underworld, and it had three heads.

There was a great guy called Heracles (Hercules in Latin) who was a demi-god, which means he would have been pretty awesome if the gods hadn’t intervened to make him go mad and kill his wife and family. Greek gods in general aren’t very nice, and you wouldn’t want them to visit at Christmas.

Heracles ended up atoning for this with twelve tasks, the last of which was to capture Cerberus himself. (It was meant to be ten tasks, but his managers decided that he had collaborated with someone from another team on one of them, and got paid by an external stakeholder for a second, so they didn’t count.)

Fortunately, it turns out that BDD’s a bit easier to tame than Cerberus, and it works best if you involve other people from the outset.

The first thing we do with BDD is have a bit of a conversation. If I know nothing about a project, I’ll ask someone to tell me about it, and whenever I think it’s useful, I ask the magic question: “Can you give me an example?”

If they aren’t specific enough – for instance, they say that there are these twelve labours they have to do – I’ll get them to be more specific. “Can you give me an example of a labour?” Labour to me means Jeremy Corbyn, not wrestling the Nemean Lion, so having these conversations helps me to find out about any misunderstandings early.

Eventually, we end up with something we can think about in concrete terms. There’s a template that we use in BDD – Given a context, when an event happens, then an outcome should occur – but even without the template, having a concrete example which starts from some context, has an event happen and ends with a desired outcome (or the best possible outcome that we can reasonably expect to happen, at least) is useful. It lets us talk about those scenarios, and ask questions about other scenarios that might exist.

Exploration by Example (what it could do)

I like to ask two questions, patterns which I call Context Questioning and Outcome Questioning. “Is there any other context, which, for the same event, gives us a different outcome?” And, “Is there any other outcome that’s important?”

The first is easy. We can usually think of ways that our outcome might be thwarted, and if we can’t, then our testers can, because testers have that “break all the things!” mindset that causes them to think of scenarios that might go a different way to the way we expect.

The second is especially useful though, because it lets us talk about what other stakeholders might need to be involved, and what they need. This is particularly important if there’s a transaction happening – both stakeholders get what they want from it, or three if you’re buying a product or service via a third party like Uber. Without it, you might end up getting one stakeholder’s desired outcome but missing the other.

“Why did they leave that wooden horse out there anyway?”

“We should ask a tester. Do we have any left that haven’t been eaten by sea-serpents?”

With these two questioning patterns, we can start to explore the scope of our requirements. We now know what the system could do. What should it do? By deciding which scenarios are out of scope, we narrow down the requirements. If we don’t know enough to decide what our scenarios ought to look like, we can either do something to try it out in a way that’s safe-to-fail and get some understanding (useful if nobody in the organization has ever done it before) or we can find an expert to talk to (if they’re available).

If you find you never talk about scenarios which you decide are out-of-scope or irrelevant, either quickly or later on, you may not be exploring enough.

Specification by Example (what it should do)

Now we’ve narrowed it down to what it should do, and we’ve got some scenarios to illustrate those aspects of behaviour, we can start turning the ideas into reality. The scenarios give us a focus, and let us continue to ask questions. If we ever come across an area we’re not familiar with, we can fall back into exploration, until we have understanding and can specify once more what our system should do.

“So, if we frown really fiercely and heavily at Charon, he should let us past to get to Cerberus. What does a really fierce frown look like? Can you give me an example?”

“Well, imagine someone just used the words ‘target velocity’ in your hearing…”

“Oh, like this?”

And, once we’ve made our ideas into something real, we can use our scenarios to see whether it does what we thought it should do – a test.

Test by Example (what it does)

We can run through the scenario again, either manually or using automation, to see if it works.

And, ideally, we write the automated version of the test first, so that it gives us a very clear focus and some fast feedback; though if we’re in high-uncertainty and keep having to fall back to exploration, we might want to lay off of that for a bit.

“So, what did happen?”

“Um, I chopped off a head, and it grew back. Twice…”

“Oh, sugar. Hold on… I think Jason ran into the same behaviour last week. I can’t remember whether it was intended or not…”

And that’s it.

Some people really focus on the tools and testing. Others focus on specification. Really, though, it’s exploration that we start with; those thought-experiments that are cheap to change, and not nearly as dangerous as the real thing.

In our software, we’re not even dealing with three-headed dogs or mythical monsters; just people who want things. They aren’t even intent on making it hard for us. Even better, those people often find value in small things, and don’t need us to finish all twelve tasks to have a completed story. It’s pretty easy to explore what they want using examples, then use the results of that conversation as specifications, then as tests.

Even so, if you do manage to tame BDD, with all three of its heads, you’re still pretty awesome.

Just remember that a puppy is not just for Christmas.


This article was originally published here.

The mob rules, ok? by Steve Tooke

Published Tue 1 Dec 2015 00:00

During the last 6 weeks or so, I’ve had the pleasure of working on Cucumber Pro with the team at Cucumber Limited. One of the key things making this such a good experience has been the way we’ve been working: Mob Programming.

What is Mob Programming?

All the brilliant people working on the same thing, at the same time, in the same space, and on the same computer — @woodyzuill

Mob Programming is a term coined by Woody Zuill. It describes a practice that he and his team “discovered” while he was coaching at Hunter Industries. It’s a way of working where the whole team gather around a single computer and work on a single problem together. The team take turns to “drive” the computer, while the other members of the team help to think through the problem and find solutions.

A Remote Mob

The Cucumber Pro team works remotely. We are geographically distributed (although we are usually in similar timezones). Obviously this makes sharing a computer more of a challenge, but we’ve found a couple of solutions that are working well for us.

The first thing is that the person driving always works on their own computer. This allows everyone to use the tools they are most comfortable with, and saves them from having to deal with lag or other connection problems on input.

To share the driver’s computer with the rest of the team we have mostly used Screenhero. Screenhero allows us to share a single computer with several other participants (I think we’ve had up to 5 or 6). Unlike other screensharing technology it also gives each user a mouse pointer. This is especially useful when trying to point out where that misspelt variable is hiding. Screenhero also allows the navigators to type, which helps from time to time.

While Screenhero does provide a voice channel, we generally prefer to use Google Hangouts for voice and video. Partly because the sound is better, but really because being able to see each other is great!

We haven’t found a really good solution for a shared whiteboard yet. Most of the drawing we’ve done has been on paper and shared with photographs. We’ve also experimented with an iPEVO camera. This lets you share a drawing live as it happens. We’ve used it to point at paper on the desktop, and at a whiteboard. This is a bit more of an interactive experience, but it still only allows one person to draw.

Mornings only

We decided that the Cucumber Pro mob would only convene in the mornings. This gives us 3.5 focussed hours where we all work together. These morning mob sessions are where we take design decisions. We discuss the work that’s to be done, talk through the business rules, and find examples that we can use to illustrate them in our Cucumber features. It’s also in these sessions that we write most of the code.

Afternoons are more free-form. For a start, everyone in the team has other responsibilities, so this leaves space for that work: dealing with email, running a business, open source, and so on.

But… it also leaves space for people to think, to read, to experiment, to fix little niggles, to automate tiresome tasks. This space is invaluable. We liberally use TODOs while we are mobbing. We use them in the same way we might note something we want to address later on an index card. Fixing TODOs in the afternoon has been quite common. Sometimes this is just tidying up and getting work out of the way, so the mob can focus on bigger tasks. Sometimes this is a spike to try out some idea before presenting it back to the mob.

Pull requests

We use GitHub’s pull requests in a couple of different ways. Firstly, any work that people undertake outside of the mob (in the afternoons) is almost always done on a pull request. This allows us to use GitHub as the communication channel about the code, and it means that work done individually is seen by someone else before it’s merged.

We have also been using pull requests for work-in-progress. Not everyone on the Cucumber Pro team is available every day. There’s often someone away delivering training or consulting, or at a conference. Again, pull requests let us use GitHub’s great tooling for seeing changes to the code over time, and for having asynchronous discussions with those who weren’t able to join the mob.

Daily retrospective

We end every mob session with a short retrospective. We ask ourselves two questions:

  • What have we learnt?
  • What puzzles us?

We use this as a chance to reflect on the work we have done, and how things went. We try to recognise things that have gone well so we can do more of them, and recognise problems early so that we can head them off.

We also spend a few minutes thinking about the next steps, where the mob’s focus should go next.

We write all of this up in a file at the root of the project and commit it to the repository. This is helpful for the team members that weren’t in the mob session. It helps to share what we’ve learnt and our questions with them. It also marks where the mob finished that day.

We’re currently adding each retrospective at the top of a single file, and maintaining a history. I’m confident that it will be useful to reflect back on how our thoughts and feelings about the project change over time.

Joy

Mob programming is a great way to build a team. I feel that we get a real sense that we’re working together towards a common goal. We solve problems together. We learn together and we teach each other. By reflecting on each session, we learn more about how each of us likes to work, and how we can all help each other.

The remote working lets us all be comfortable in our surroundings. We’ve had Matt join for a few hours while he’s been in Australia. The last couple of days Aslak has been in the mob, with his new baby nestled in a sling; there is something really calming about hearing contented baby gurgles while you’re working.

Remote collaboration is quite an intense way to work. I’ve done quite a lot of remote pair programming and it can be quite draining. Keeping the afternoons free really helps to combat this.

Working in the mob every day is fantastic. I look forward to the morning sessions because they’re fun, and I feel like we’re growing as a team every day; but the afternoon space is just as important.

This article was originally published here.

Capabilities and Learning Outcomes by Liz

Published Thu 17 Sep 2015 17:37

When I started training, I taught topics. Lots of topics!

Nowadays, thanks to some help from Marian Willeke and her incredible understanding of how adults learn, I get to teach capabilities instead. It’s much more fun. This is how I do it.

First off, because I’m into BDD and hypnosis, I sit and imagine some scenarios in which people actually use the learning I’ve given them. Maybe they’re the Product Owner of a Scrum Team, or using BDD for the first time, or they have a good understanding of Agile, and now they’re learning how to coach. I watch them in my head and look at what they do, or I think about what I’ve done, in similar situations.

As with all scenarios, the event that’s happening requires capabilities: the ability to do something, and do it well.

So, for instance, I imagine a team sitting together in a huddle, talking through BDD’s scenarios. Well, you’ll need to be able to use the different strengths of the different roles. And you’ll need to be able to construct well-formed scenarios, and to differentiate between acceptance criteria and a specific example.

If I get stuck thinking about what capabilities I need to teach, I go look at Bloom’s Taxonomy, and the Revised Cognitive Domain – I really like Don Clark’s site. Marian gives some advice: when you’re teaching adults, aim higher than merely remembering; give them something they can actually do with it. The keywords help me to think about the level of expertise that the learners will need to get to (though I don’t always stick to them).

So for instance, I end up with capabilities like these:

  • Explain BDD and its practices
  • Apply shortcuts to well-understood requirements to reduce analysis and planning time
  • Identify core and incidental stakeholders for a project

If I’m training, I use these in conjunction with a bit of teaching, then games or exercises that help attendees really experience the things they’re able to do for the first time, and give me a chance to help them if I see they need it. The learning outcomes make a great advert for the course, too! And I use them as a backlog while I’m running the course, so I always know what’s done and what’s next.

More recently, I’ve been using this technique to put together documents which serve the same purpose for people I can’t train directly. I put the learning outcomes at the start: “When you’ve read this, you will be able to…” It’s fun to relate the titles of each section back to the outcomes at the beginning! And, of course, each capability is an embedded command to someone to actually try out the new skill.

Best of all, each capability comes with its own test. As the person writing the course or document, I can think to myself, “If my student goes on this course or reads this document, will they be able to do this thing?”

And, if they do actually take the course, I can ask them directly: “Do you feel confident now about doing this thing?” It gives me a chance to go over material if I have time, or to offer follow-up support afterwards (which I generally offer with all my courses, anyway).

You can read more about Bloom’s Taxonomy, and see the backlog for one of my BDD courses, on Marian’s site.

Now you should be able to create courses using capabilities, instead of topics. Hopefully you really want to, as well… but the Affective Domain, and what you can do with it, is a topic for another post.


This article was originally published here.

Hands-on With the Cucumber Events API by Steve Tooke

Published Mon 14 Sep 2015 00:00

Cucumber Ruby 2.1 introduces the new Events API — a simple way to find out what’s happening while Cucumber runs your features. Events are read-only, and simplify the process of writing formatters and other output tools.

I’ll illustrate how to use the API with a worked example that streams Cucumber test results to a browser in real time.

Can you give me an example?

As much as we love our console applications, we can get a much richer experience in a web browser. How could we get Cucumber to push information into a nice web UI, without losing the rich information available with the built-in formatters?

Let’s build a super-simple example using the Events API that uses a websocket to update a web page while Cucumber is running.

There are lots of ways to run a websocket server – a favourite of mine is websocketd, because it’s super simple. Give it an executable that reads STDIN and writes STDOUT and you’re done!

For our very simple websocket reporter we are going to use a UNIX named pipe to push information out of our Cucumber process. To get these events out onto a websocket we need a shell command that reads from the named pipe and echoes it back onto STDOUT.

subscriber.sh

#!/bin/bash

fifo_name="events";

[ -p $fifo_name ] || mkfifo $fifo_name;

while true
do
  if read line <$fifo_name; then
    echo $line
  fi
done

Make sure the script is executable with chmod +x subscriber.sh.

When you run it, it will create an events named pipe if one doesn’t exist already, then wait until there is data on the pipe for it to read. We can see it in action by putting some data on to the pipe: echo "hello, world" > events.

Writing Cucumber Events to the pipe

Let’s start by asking Cucumber to write messages to the pipe. Add the following to features/support/env.rb

EVENT_PIPE = "events"
unless File.exist?(EVENT_PIPE)
  `mkfifo #{EVENT_PIPE}`
end

publisher = File.open(EVENT_PIPE, "w+")
publisher.sync = true
publisher.puts "started"

at_exit {
  publisher.puts "done"
  publisher.close
}

This doesn’t use the Events API yet, but we’ve got the plumbing in place now to write to the same named pipe as subscriber.sh will read from. With subscriber.sh up and running, you should be able to run cucumber and see started and done output to the terminal by subscriber.sh.

For our simple web-browser Cucumber reporter we want to show each step that Cucumber runs, and its result. We want Cucumber to tell us when it starts to execute, when it starts to run each step, when it finishes a step (and what the result was), and when it’s finished executing.

We’ll send some formatted JSON that gives us some information about the events:

{
  "event": "event_name",
  "data": {} //information about the event
}

We can modify features/support/env.rb to give us the start and end events:

require 'json'

EVENT_PIPE = "events"
unless File.exist?(EVENT_PIPE)
  `mkfifo #{EVENT_PIPE}`
end

publisher = File.open(EVENT_PIPE, "w+")
publisher.sync = true
publisher.puts({event: "started", data: {}}.to_json)

at_exit {
  publisher.puts({event: "done", data: {}}.to_json)
  publisher.close
}

The Cucumber Events API gives us access to what’s going on inside Cucumber while it’s running our features. We want to know when a step is going to be run, and what happened when it finished. Cucumber provides us with the BeforeTestStep and AfterTestStep events. To hear about these events we can use Cucumber’s AfterConfiguration hook to get access to the current config, and add handlers for specific events with the on_event method:

AfterConfiguration do |config|
  config.on_event :before_test_step do |event|
  end

  config.on_event :after_test_step do |event|
  end
end

Putting this all together we can modify features/support/env.rb to push these events out onto our named pipe too:

require 'json'

EVENT_PIPE = "events"
unless File.exist?(EVENT_PIPE)
  `mkfifo #{EVENT_PIPE}`
end

publisher = File.open(EVENT_PIPE, "w+")
publisher.sync = true

AfterConfiguration do |config|
  publisher.puts({event: "started", data: {}}.to_json)

  config.on_event :before_test_step do |event|
    publisher.puts(
      {
        event: "before_test_step",
        data: {}
      }.to_json
    )
  end

  config.on_event :after_test_step do |event|
    publisher.puts(
      {
        event: "after_test_step",
        data: { result: event.result.to_s }
      }.to_json
    )
  end
end

at_exit {
  publisher.puts({event: "done", data: {}}.to_json)
  publisher.close
}

Now if you run Cucumber, with subscriber.sh up and running you should see something like:

$ ./subscriber.sh
{"event":"started","data":{}}
{"event":"before_test_step","data":{}}
{"event":"after_test_step","data":{"result":"✓"}}
{"event":"before_test_step","data":{}}
{"event":"after_test_step","data":{"result":"✓"}}
{"event":"before_test_step","data":{}}
{"event":"after_test_step","data":{"result":"✓"}}
{"event":"before_test_step","data":{}}
{"event":"after_test_step","data":{"result":"✗"}}
{"event":"before_test_step","data":{}}
{"event":"after_test_step","data":{"result":"-"}}
{"event":"done","data":{}}

Hooking up a WebSocket

Great! We’ve got Cucumber sending our events. We now want to get these events pushed into a web page using a websocket.

websocketd lets us hook our subscriber.sh command up to a websocket. Let’s have a look at what happens using websocketd’s devconsole mode:

$ websocketd --port=8080 --devconsole ./subscriber.sh

Then point your browser at http://localhost:8080 and you should see the websocketd dev console.

Clicking the little “✔” in the top left will connect websocketd’s dev console to the running socket. If you now echo some text onto the named pipe, you will see it appear in the console in the web browser. Running Cucumber again, you should see something like this:

[Screenshot: Cucumber events streaming into the websocketd dev console]

Finishing everything up, let’s create a simple web page that uses the websocket to get information from Cucumber as it’s running. Save this as index.html:

<!DOCTYPE html>
<html>
<head>
</head>
<body>
<h1>Cucumber Runner</h1>
<p id="status">disconnected</p>
<div id="runner"></div>
<script>
  // helper functions that update the page as events arrive
  function stepStarted() {
    var runner = document.getElementById("runner");
    var resultNode = document.createElement("span");
    resultNode.textContent = "*";
    runner.appendChild(resultNode);
  }

  function stepResult(result) {
    var resultNode = document.getElementById("runner").lastElementChild;
    resultNode.textContent = result;
  }

  function clearRunner() {
    document.getElementById('runner').innerHTML = "";
  }

  function statusWaiting() {
    document.getElementById('status').textContent = "waiting";
  }

  function statusRunning() {
    document.getElementById('status').textContent = "running";
  }

  function statusDisconnected() {
    document.getElementById('status').textContent = "disconnected";
  }

  function done() {
    statusWaiting();
  }

  var CucumberSocket = function() {
    var ws = new WebSocket('ws://localhost:8080/');
    var callbacks = {};

    this.on = function(event_name, callback){
      callbacks[event_name] = callback;
      return this;
    }

    var dispatch = function(event_name, message){
      var callback = callbacks[event_name];
      if(typeof callback == 'undefined') return;
      callback(message);
    }

    ws.onmessage = function(event){
      var json = JSON.parse(event.data)
      dispatch(json.event, json.data)
    }

    ws.onclose = function(){dispatch('close',null)}
    ws.onopen = function(){dispatch('open',null)}
  };

  var cucumber = new CucumberSocket();

  cucumber.on('open', statusWaiting);

  cucumber.on('close', statusDisconnected);

  cucumber.on('started', function() {
      statusRunning();
      clearRunner();
  });

  cucumber.on('before_test_step', function(data) {
    stepStarted();
  });

  cucumber.on('after_test_step', function(data) {
    stepResult(data.result);
  });

  cucumber.on('done', function() {
    statusWaiting();
  })
</script>
</body>
</html>

Using websocketd’s static site server, we can get our little web page up and running: websocketd --port=8080 --staticdir=. ./subscriber.sh, then open http://localhost:8080. Running Cucumber now should show you progress in the web page!

Cucumber Websocket

What events are available?

Cucumber 2 introduced a new model for executing a set of features. Each scenario is compiled into a suite of Test Cases, each made up of Test Steps; Test Steps include Before and After hooks. Cucumber fires five events based on that model: before and after each test case, before and after each test step, and one when the whole run has finished.

What can I use it for?

The Events API is there for getting information out of Cucumber. It’s going to be the best way to write new formatters in future — the old formatter API will be removed in Cucumber 3.0. If you’re looking for a way to contribute to Cucumber then rewriting some of the old formatters to use the new events API would be a tremendous help.
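
To give a flavour, here’s a minimal sketch of what an event-based formatter might look like. Treat it as an illustration only: the constructor signature, the out_stream accessor and the event name are assumptions based on the events used in this post, so check the documentation for your Cucumber version before copying it.

# progress_formatter.rb — a sketch of a formatter built on the Events API.
# The :after_test_step event name and config.out_stream accessor are
# assumed here; verify them against your Cucumber version.
class ProgressFormatter
  def initialize(config)
    @io = config.out_stream

    # Print one character per step as results arrive, e.g. "✓✓✓✗-"
    config.on_event :after_test_step do |event|
      @io.print event.result.to_s
    end
  end
end

You’d then run it with something like cucumber --require progress_formatter.rb --format ProgressFormatter.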

If you have any questions, please come and join us on our Gitter channel or the mailing list. All the code for this blog post is available here.

This article was originally published here.

On Epiphany and Apophany by Liz

Published Wed 9 Sep 2015 11:14

We probe, then sense, then respond.

If you’re familiar with Cynefin, you know that we categorize the obvious, analyze the complicated, probe the complex and act in chaos.

You might also know that those approaches to the different domains come with a direction to sense and respond, as well. In the ordered domains – the obvious and complicated, in which cause and effect are correlated – we sense first, then we categorize or analyze, and then we respond.

In the complex and chaotic domains, we either probe or act first, then sense, then respond.

Most people find action in chaos to be intuitive. It’s a transient domain, after all; it resolves itself quickly, and it might not resolve itself in your favour… and is even less likely to do so if you don’t act (the shallow dive into chaos notwithstanding). We don’t sit around asking, “Hm, I wonder what’s causing this fire?” We focus on putting the fire out first, and that makes sense.

But why do we do this in the complex domain? Why isn’t it useful to make sense of what we’re seeing first, before we design our experiments?

As with many questions involving human cognition, the answer is: cognitive bias.

We see patterns which don’t exist.

The term “epiphany” can be loosely defined as that moment when you say, “Oh! I get it!” because you’ve got a sudden sense of understanding something.

The term “apophany” was originally coined as a German word for the same phenomenon in schizophrenic experiences; that moment when a sufferer says, “Oh! I get it!” when they really don’t. But it’s not just schizophrenics who suffer from this. We all have this tendency to some degree. Pareidolia, the tendency to see faces in objects, is probably the best-known type of apophenia, but we see patterns everywhere.

It’s an important part of our survival. If we learn that the berry from that tree with those type of leaves isn’t good for us, or to be careful of that rock because there are often snakes sunning themselves there, or to watch out for the slippery moss, or that the deer come down here to drink and you can catch them more easily, then you have a greater chance of survival. We’re always, always looking out for patterns. In fact, when we find them, it’s so enjoyable that this pattern-learning, and application of patterns in new contexts, forms the heart of video games and is one reason why they’re horribly addictive.

In fact, our brains reward us for almost seeing the pattern, which encourages us to keep trying… and that’s why gambling is also addictive, because a lot of the time, we almost win.

In the complex domain, cause and effect can only be understood in retrospect.

This is pretty much the definition of a complex domain; one in which we can’t understand cause and effect until after we’ve caused the effect. Additionally, if you do the same thing again and again in a complex domain, it will not always have the same effect each time, so we can’t be sure of which cause might give us the effect. Even the act of trying to make sense of the domain can itself have unexpected consequences!

The problem is, we keep thinking we understand the problem. We can see the root causes. “Oh! I get it!”… and off we blithely go to “fix” our systems.

Then we’re surprised when, for instance, complexity reasserts itself and making our entire organization adopt Scrum doesn’t actually enable us to deliver software like we thought it would (though it might cause chaos, which can give us other opportunities… if we survive it).

This is the danger of sensing the problem in the complex domain; our tendency to assume we can see the causes that we need to shift to get the desired effects. And we really can’t.

The best probes are hypothesis-free.

Or rather, the hypothesis is always, “I think this might have a good impact.” Having a reasonable reason for thinking this is called coherence. It’s really hard, though, to avoid tacking on, “…because this will be the outcome.” In the complex domain, you don’t know what the outcome is going to be. It might not be a good outcome. That’s why we spend so much time making sure our probes are safe-to-fail.

I’ve written a fair bit on how to use scenarios to help generate robust experiments, but stories – human tales of what’s happening or has happened – are also a good way to find places that probes might be useful.

Particularly, if you can’t avoid having a hypothesis around outcomes (and you really can’t), one trick you can try is to have multiple outcomes. These can be conflicting, to help you check that you’re not hung up on any one outcome, or even failure outcomes that you can use to make sure your probe really is safe-to-fail.

Having multiple hypotheses means we’re more likely to find other things that we might need to measure, or other things that we need to make safe.

I really love Sensemaker.

Cognitive Edge, founded by Dave Snowden of Cynefin fame, has a really lovely bit of software called Sensemaker that collects narrative fragments – small stories – and allows the people who write those stories to say something about their desirability using Triads and Dyads and Stones.

Because we don’t know whether a story is desirable or not, the Triads and Dyads that Sensemaker uses are designed to allow for ambiguity. They usually consist of either two or three things that are all good, all bad or all neutral.

For instance, if I want to collect stories about pair programming, I might use a Dyad which has “I want to pair-program on absolutely everything!” at one end, and “I don’t want to pair-program on anything, ever,” at the other. Both of those are so extreme that it’s unlikely anyone wants to be right at either end, but they might be close. Or somewhere in the middle.

In CultureScan, Cognitive Edge use the triad, “Attitudes were about: Control, Vulnerability, or Indifference.” You can see more examples of triads, together with how they work, in the demo.

If lots and lots of people add stories, then we start seeing clusters of patterns, and we can start to think of places where experiments might be possible.

A fitness landscape from Cognitive Edge shows loose and tightly-bound clusters, together with possible directions for movement.


In the fitness landscapes revealed by the stories, tightly-bound clusters indicate that the whole system is pretty rigidly set up to provide the stories being seen. We can only move them if there’s something to move them to; for instance, an adjacent cluster. Shifting these will require big changes to the system, which means a higher appetite for risk and failure, for which you need a real sense of urgency.

If you start seeing saddle-points, however, or looser clusters… well, that means there’s support there for something different, and we can make smaller changes that begin to shift the stories.

By looking to see what kinds of things the stories there talk about, we can think of experiments we might like to perform. The stories, though, have to be given to the people who are actually going to run the experiments. Interpreting them or suggesting experiments is heading into analysis territory, which won’t help! Let the people on the ground try things out, and teach them how to design great experiments.

A good probe can be amplified or dampened, watched for success or failure, and is coherent.

Cognitive Edge have a practice called Ritual Dissent, which is a bit like the “Fly on the Wall” pattern but done in a fairly negative way: the group to whom the experiment is being presented critiques it against the criteria above. I’ve found that testers, with their critical “What about this scenario?” mindsets, can really help to make sure that probes really are good probes. Make sure the person presenting can take the criticism!

There’s a tendency in human beings, though, to analyze their way out of failure; to think of failure scenarios, then stop those happening. Failure feels bad. It tells us that our patterns were wrong! That we were suffering from apophany, not epiphany.

But we don’t need to be afraid of apophany. Instead of avoiding failure, we can make our probes safe-to-fail; perhaps by doing them at a scale where failure is survivable, or with safety nets that turn commitments into options instead (like having roll-back capability when releasing, for instance), or – my favourite – simply avoiding the trap of signalling intent when we didn’t mean to, and instead, communicating to people who might care that it’s an experiment we want to try.

And that it might just make a difference.


This article was originally published here.

POODNYC 2015 Scholarships have been awarded by Sandi Metz

Published Wed 26 Aug 2015 12:00

Scholarships for the Oct 19-21 Practical Object-Oriented Design Course (POODNYC) in New York City have been awarded! Winners are listed below, but before I introduce them I'd like to give an overview of the applicant pool and selection process.

I'll be awarding scholarships for future public classes and hope that transparency about how this works will motivate you into talking some deserving person into applying, or into applying for your own deserving self.

The POODNYC Scholarship

The scholarship includes a seat in POODNYC, and airfare to and lodging in NYC (all courtesy of Hashrocket, to whom I am very grateful). As you can see, it's a full ride. The intent was to remove every financial barrier that would prevent the recipient from attending.

Applicants

There were 24 applicants.

Experience Level:

  • 16 - less than 1 year of experience or currently in school / attending bootcamp
  •  8 - 1+ years
  •  8 - career changers

Gender:

  • 19 - women
  •  5 - men

Minorities/Under Represented:

  • 13 - people of color (4 men, 9 women)
  •  3 - women over 35

Location:

  • 11 - New York
  •  7 - Other states
  •  6 - International (Ecuador, England and Germany)

Selection Criteria

We (me and 2 others) knew that we wanted these scholarships to support good works and/or diversity. The first time we gave scholarships (for the Oct 2014 POODNC course) we supplied very minimal instructions and relied on each person to argue their best case. These instructions merely said 'Tell us why you deserve a scholarship'. This year we added an additional field for ‘Amount of Programming Experience’; more on this below.

While we don't have a rigid checklist by which to evaluate applications, we definitely look favorably upon candidates who:

  1. are engaged in good works
  2. have a moderate amount of programming experience
  3. are demographically diverse from the community at large
  4. have a clear financial need

Good Works:

We preferred candidates with a demonstrated track record of good works, where we define 'good work' as anything from "I spend my spare time on 'Code for America' projects" to "I volunteer as a coach at RailsBridge, RailsGirls, BlackGirlsCode". We preferred candidates who could say "This scholarship will help me do a better job at this thing I am already doing" over candidates who said "This scholarship will make me better".

This year there were so many applicants giving back to the community that we gave additional weight to this criterion.

Experience:

We required some amount of real-world programming experience, 'some' being defined as 'more than a bootcamp'. Folks with very little programming experience have successfully taken this course, but more experienced programmers get correspondingly more out of it. The scholarships are intended as levers to support change; requiring at least 6 months (ish) of real-world programming experience moves the fulcrum and makes each scholarship have more value.

There were a number of applicants who, although they were engaged in all kinds of good works, didn't yet have enough experience to qualify for a scholarship. We regretfully removed them from consideration and urge them to reapply in the future.

Diversity:

We believe that human diversity improves both the software we create and the community in which we work. We were biased towards candidates who differed from the demographic norm (i.e., in age, ethnicity, gender, etc).

Financial need:

While last year we required folks to demonstrate a clear financial need, this year a few candidates were engaged in such impressive good works that we were tempted to ignore this criterion. As a result of this experience, we are officially softening our stance on financial need. Although we will continue to take ability to pay into consideration, future scholarship applicants will not be disqualified based on ability to pay.

This year we did not disqualify a single candidate based on our assessment of their ability to pay.

As I said above, we didn't explicitly ask for financial, experience, demographic or good works information but it was easy to get. Many applicants actually included it on their submission and simple web searches unearthed the missing bits.

Evaluation

A 35-something woman of color who was in the midst of career transition while organizing a community meetup and hosting hack-a-thons would rank very high by the criteria above, and a 20-something Caucasian male who was employed as a junior developer, well, not so much.

The applicants for POODNYC were doing so much good for the community that candidates needed to score well on both ‘Good Works’ and ‘Experience Level’ to stay in the running. Applications which survived those tests went on to be evaluated on ‘Diversity’ and ‘Financial Need’. We narrowed the list from 24 to five finalists (interestingly, like last year, the demographics of the five matched those of the whole), and then selected the two winners.

The Winners Are

Charlotte Chang

Charlotte is a career changer and a recent programming convert. Upon graduating from Flatiron School, she tried freelancing only to discover that "it takes a village to raise a junior developer". She lives in Cleveland, a former industrial powerhouse which has lost 50% of its manufacturing jobs since 1954, and which had a median household income of $26,096 in 2013. Living in a community which needs to shift its focus to building technology skills led Charlotte to give back.

As part of her efforts to refresh the rustbelt, Charlotte is an organizer of Cleveland Agile Group (CleAg) and is very involved in Make on the Lake, an Internet of Things meetup. She has volunteered for Cleveland Give Camp and Canalway Partners*. Charlotte will also be speaking to future technologists at We Can Code It.

Charlotte doesn't want to be just a 'developer', she wants to be a great developer, one with strong coding practices who gives back to the community and becomes a mentor for others. Charlotte currently works at LeanDog.

* No, she did not receive a scholarship because she volunteers for the bike path. This is pure coincidence.

Richard Lau

Richard is a combat veteran who is passionate about helping other veterans transition their career after their service is complete. He is currently building a free program to help veterans, along with their spouses and children, learn programming. Richard is also an advocate for veteran entrepreneurship.

He attended the Web Development Immersive at General Assemb.ly on a Veteran Scholarship. After graduating from the program he started volunteering for RailsBridge NYC. Richard plans to use the course to improve his skills so he can be a better and more knowledgeable teacher. He also serves as a co-organizer of the New York VIM meetup.

Richard lives in New York City.

We Want You

As you can see, a candidate who is engaged in good works and is an outlier in every demographic category would be unbeatable, but qualifying on even a subset of these criteria can win you a scholarship. If you, or someone you know, fills the bill, I hope to see your application in the future.

A number of applicants were actively doing good works but did not yet have enough experience to get the most out of the course. If you're one of these folks, I urge you to re-apply in the future. One of today's winners is just such a second time applicant, and proof that additional experience combined with persistence can pay off.

So there you have it ... the POODNYC scholarship winners. Please join me in extending my congratulations to Richard and Charlotte. Our community is improved by their presence. I'm grateful that they're here and gratified to support them along their way.

In closing, one more shout-out to Hashrocket. Their continuing support of POOD course scholarships is a sign of their ongoing commitment to our community and reflects their core values. My thanks.


Sign up for my newsletter, which contains random thoughts that don't quite make it into blog posts.

This article was originally published here.

Negative Scenarios in BDD by Liz

Published Fri 19 Jun 2015 14:04

One problem I hear repeatedly from people is that they can’t find a good place to start talking about scenarios.

An easy trick is to find the person who fought to get the budget for the project (the primary stakeholder) and ask them why they wanted the project. Often they’ll tell you some story about a particular group of people that they want to support, or some new context that’s coming along that the software needs to work in. All you need to do to get your first scenario in that situation is ask, “Can you give me an example?”

When the project or capability is about non-functionals such as security or performance, though, this can be a bit trickier.

I can remember when we were talking to the Guardian editors about performance on the R2 project. “When (other national newspaper) went live with their new front page,” one editor explained, “their site went down for 3 days under the weight of people coming to look at the page. We don’t want that to happen.”

Or, as another organization said, “We went live, and it crashed. It took three months to get the site up and running again. The code was so awful we couldn’t fix it.”

These kinds of negative stories are often drivers, particularly when there are non-functionals involved. You can always handle the requirements through monitoring instead of testing, but the conversation can’t follow the usual, “Can you give me an example?” pattern, because all the examples are things that people don’t want.

Instead, keep that negativity, and ask questions like, “What performance would we need to have to avoid that happening to us? Do we have a good security strategy for avoiding the hacking attempt that ended up with (major corporation)’s passwords getting stolen? How do we make sure we don’t crash when we go live?” Keep the focus on the negatives, because that’s what we want to avoid.

When you come to write the scenarios down, whether it’s in terms of monitoring or a test, it’s often worth keeping that negative around. You can create positive scenarios to look at the monitoring boundaries, but the negative reminds people why they’re doing this.

Given we’ve gone live with the front page
When Tony Blair resigns on the same day
Then the site shouldn’t go down under the weight of people reading that news.
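
A companion positive scenario for the monitoring boundary might then look something like this (the numbers are invented, purely for illustration):

Given we’ve gone live with the front page
When 10,000 people request it within the same minute
Then every page should be served in under 2 seconds.

The positive version gives you something concrete to test or monitor against; the negative version above it keeps the original story alive.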

Remember that if you’re the first people to solve the problem then you’ll need to try something out, but if it’s just an industry standard practice, make sure you’ve got someone on the team who’s implemented it before.

Part of the power of BDD’s scenarios is that they provide examples as to why the behaviour is valuable. You’ll need to convert this to positive behaviour to implement it, but if avoiding the negatives is valuable, include those too, even if it’s just text on a wiki or blurb at the top of a feature file… and don’t be afraid to start there. Negative scenarios are hugely powerful, especially since they often have the most interesting stories attached to them.


This article was originally published here.

140 is the new 17 by Liz

Published Sat 6 Jun 2015 15:49

I took a break from Twitter.

A while back, I ran a series of talks on Respect for People. I talked about systems which encourage respect being those which are constrained or focused, transparent, and forgiving. I also outlined one system which had none of those, and would therefore have a tendency towards disrespect: Twitter.

Twitter is unforgiving, because it’s public. It isn’t transparent; 140 characters isn’t enough to even get a hint of motivation. It isn’t constrained, except by its 140 characters. And, in a week in which a lot of stuff happened offline, followed by yet another Twitter conversation that left me wondering why I bothered, I had enough.

I decided to take a break to work out what it was I actually valued about Twitter, and why I kept coming back to it, even though I knew it was the system at fault, not the people.

I worked it out. I know, now, what it is I’m looking for. And the best way to explain it is to look to a similarly constrained system: the Haikai no Renga.

I want more Haikai no Renga.

Back in the old days, the Japanese used to get together and have dinner parties.

At these dinner parties, they had a game they played. Do you remember those childhood games where one person would write a line of a story, then the next person would write a line then fold it over, then the next person, so they could only see the line that came before? And then, when the whole story unfolds, there are usually enough coincidences and ridiculousness to make everyone laugh?

This game was a bit like that, but it always started with a poem; and the poem was usually 17 syllables long, arranged as lines of 5, 7, 5 syllables.

Then the next person would add to the poem. Their addition would be 7, 7. The two verses, together, form a tanka. Then the next person would add a 5, 7, 5; and the next a 7, 7, and so on. The poem would grow and change from its original context, and everyone would laugh and have a good time.

This game became so popular that the people who were good at starting the game got called on for their first verse a lot, so they started collecting those verses. Finally, one guy called Shiki realised that those first verses were themselves an art form, and he coined the term haiku, being a portmanteau pun on the haikai no renga, meaning a humorous linked poem, and hokku, meaning first.

In Japanese, haiku work really well with 17 syllables; but in English I think they actually work better with about 13 to 15. That’s because there are aspects of haiku which are far more important than the syllable count.

There’s a word which sets some context, usually seasonal, called the kigo.  More important than that, though, is the kireji, or cutting word. This word juxtaposes two different concepts within those three short lines, and shows how they’re related. We sometimes show it in English with a bit of punctuation.

The important thing about the concepts is that they should be a little discordant. There should be context missing. The human brain is fantastic at creating relationships and seeing patterns, and it fills in that missing context for itself, creating something called the haiku moment.

My favourite example of this came from someone called Michael in one of my early workshops:

hot air
rushing past my face –
mind the gap!

(Michael, if you’re out there, please let me know your surname as I use this all the time and would like to credit you properly.)

If you live in London like I do, having a poem like that, with the subject matter outlined but not filled in, can make you feel suddenly as if you’re right there on the platform edge with the train coming in. You make the poem. It’s the same phenomenon that causes the scenes in books to be so much better than the movies, because your involvement in creating them really makes them come to life.

But because there’s not enough context, and because one person’s interpretation can differ from another, the haikai no renga changes. It moves from one subject to another. It can be surprising. It can be funny. It can be joyful. Each verse builds on what’s come before, and takes it into a new place.

Haikai no Renga is a bit like the “Yes, and…” game.

In the “Yes, and…” improv game, each person builds on what the person before has said.

There was a man who lived in a house with a green door.
Yes, and the door knocker was haunted.
Yes, and it was the ghost of a cat who used to live there.
Yes, and…

In the “Yes, and…” game, nobody calls out the person before. Nobody says, “Wait a moment! How can a door knocker be haunted? That makes no sense!”

Instead, they run with the ridiculousness. They make it into something playful, and carry it on, and gradually the sense emerges.

They don’t say, “No. You’re wrong. Stop.” They say, “Hah, that’s funny. Let’s run with that and see where it takes us.”

If people were to play this game on Twitter, they might do it by making requests for more details. “Yes, and what was it haunted by?” “Yes, and did the man own the cat?”

Or they might provide links to enlightening information. “Yes, and have you read the stats on door knocker hauntings?” “Yes, and do you remember that film where the door knockers spoke in riddles?”

Or they might just add another line, and see where it leads.

It starts with a Haiku.

Like the “Yes, and…” game, and like Improv, the haiku which kicks off the haikai no renga has to be an offer; something that the other players or poets can actually play on.

If a haiku were insulting to the other poets, or had some offensive content, then I imagine the poets wouldn’t play, and probably quite rightly you’d get an argument instead of a poem. Equally, though, if the haiku is too perfect, and too complete, then there’s no room for interpretation. There’s no room for other poets to look good, then make their own offers.

And this is where I think I can make a change to the way I’m using Twitter. If I state something as if it’s a fact, or I don’t leave the right kind of opening for conversation, then I won’t get the renga I’m looking for. It doesn’t even matter whether it’s true or not, or whether there’s context missing that other people don’t know about, or whether it’s a quote from someone else; if it’s not an offer, I won’t get the renga. If I’m writing something to make myself, or one of my friends, look knowledgeable and wise, I won’t get the renga. If I’m being defensive, I won’t get the renga. If I’m engaging someone who obviously doesn’t want the renga, well, then I won’t get the renga.

And I want the renga.

So that’s what I’m going to be trying to do. I’ll try to make tweets, in future, which are offers to you, my fellow Tweeters, to join in the game and play with the conversation and see if it takes us somewhere surprising and joyful. I’ll try to make tweets which invite the “Yes, and…” rather than the “No, but…”. And if I do happen to get the “No, but…”, as I’m sure will happen, then I’ll work out what to do at that point. At least, now, I know better what it is that I want.

If you want to play, come join me.

It’s just a tweet… but it could be poetry.


This article was originally published here.

The Estimates in #NoEstimates by Liz

Published Fri 1 May 2015 14:34

A couple of weeks ago, I tweeted a paraphrase of something that David J. Anderson said at the London Lean Kanban Day: “Probabilistic forecasting will outperform estimation every time”. I added the conference hashtag, and, perhaps most controversially, the #NoEstimates one.

The conversation blew up, as conversations on Twitter are wont to do, with a number of people, perhaps better schooled in mathematics than I am, claiming that the tweet was ridiculous and meaningless. “Forecasting is a type of estimation!” they said. “You’re saying that estimation is better than estimation!”

That might be true in mathematics. Is it true in ordinary, everyday English? Apparently, so various arguments go, the way we’re using that #NoEstimates hashtag is confusing to newcomers and making people think we don’t do any estimation at all!

So I wanted to look at what we actually mean by “estimate”, when we’re using it in this context, and compare it to the “probabilistic forecasting” of David’s talk.

Defining “Estimate” in English

While it might be true that a probabilistic forecast is a type of estimate in maths and statistics, the commonly used English definitions are very different. Here’s what Wikipedia says about estimation:

Estimation (or estimating) is the process of finding an estimate, or approximation, which is a value that is usable for some purpose even if input data may be incomplete, uncertain, or unstable.

And here’s what it says about probabilistic forecasting:

Probabilistic forecasting summarises what is known, or opinions about, future events. In contrast to single-valued forecasts … probabilistic forecasts assign a probability to each of a number of different outcomes, and the complete set of probabilities represents a probability forecast.

So an estimate is usually a single value, and a probabilistic forecast is a range.

Another way of phrasing that tweet might have been, “Providing a range of outcomes along with the likelihood of those outcomes will lead to better decision-making than providing a single value, every time.”

And that might have been enough to justify David’s assertion on its own… but it gets worse.

Defining “Estimate” in Agile Software Development

In the context of Software Development, estimation has all kinds of horrible connotations. It turns out that Wikipedia has a page on Software Development Estimation too! And here’s what it says:

Software development effort estimation is the process of predicting the most realistic amount of effort (expressed in terms of person-hours or money) required to develop or maintain software based on incomplete, uncertain and noisy input.

Again, we’re looking at a single value; but do notice the “incomplete, uncertain and noisy input” there. Here’s what the page says later on:

Published surveys on estimation practice suggest that expert estimation is the dominant strategy when estimating software development effort.

The Lean / Kanban movement has emerged (and possibly diverged) from the Agile movement, in which this strategy really is dominant, mostly thanks to Scrum and Extreme Programming. Both of these suggest the use of story points and velocity to create the estimates. The idea of this is that you can then use previous data to provide a forecast; but again, that forecast is largely based on a single value. It isn’t probabilistic.

Then, too, the “expertise” of the various people performing the estimates can often be questionable. Scrum suggests that the whole team should estimate, while XP suggests that developers sign up to do the tasks, then estimate their own. XP, at least, provides some guidance for keeping the cost of change low, meaning that expertise remains relevant and velocity can be approximated from the velocity of previous sprints. I’d love to say that most Scrum teams are doing XP’s engineering practices for this reason, but a lot of them have some way to go.

I have a rough and ready scale that I use for estimating uncertainty, which helps me work out whether an estimate is even likely to be made based on expertise. I use it to help me make decisions about whether to plan at all, or whether to give something a go and create a prototype or spike. Sometimes a whole project can be based on one small idea or piece of work that’s completely new and unproven, the effort of which can’t even be estimated using expertise (because there isn’t any), let alone historical metrics.

Even when we have expertise, the tendency is for experts to remember the mode, rather than the mean or median value. Since we often make discoveries that slow us down but rarely make discoveries which speed us up, we are almost inevitably over-optimistic. Our expertise is not merely inaccurate; it’s biased and therefore misleading. Decisions made on the basis of expert estimates have a horrible tendency to be wrong. Fortunately everyone knows this, so they include buffers. Unfortunately, work tends to expand to fill the time available… but at least that makes the estimates more accurate, right?

One of the people involved in the Twitter conversation suggested we should be using the word “guess” rather than “estimate”. That might indeed be mathematically more precise; and if we called them that, people might start looking for different ways to inform the decisions we need to make.

But they don’t. They’re called “estimates” in Scrum, in XP, and by just about everyone in Agile software development.

But it gets worse.

Defining “Estimate” in the context of #NoEstimates

Woody Zuill found this very early tweet from Aslak Hellesøy using the #NoEstimates hashtag, possibly the first:

@obie at #speakerconf: “Velocity is important for budgeting”. Disagree. Measuring cycle time is a richer metric. #kanban #noestimates

So the movement started with this concept of “estimate” as the familiar term from Scrum and XP. Twitter being what it is, it’s impossible to explain all the context of a concept in 140 characters, so a certain level of familiarity with the ideas around that tag is assumed. I would hope that newcomers to a movement would approach it with curiosity, and hopefully this post will make that easier.

Woody confessed to being one of the early proponents of the hashtag in the context of software development. In his post on the #NoEstimates hashtag, he defines it as:

#NoEstimates is a hashtag for the topic of exploring alternatives to estimates [of time, effort, cost] for making decisions in software development.  That is, ways to make decisions with “No Estimates”.

And later:

It’s important to ask ourselves questions such as: Do we really need estimates? Are they really that important?  Are there options? Are there other ways to do things? Are there BETTER ways to do thing? (sic)

Woody, and Neil Killick who is another proponent, both question the need for estimates in many of the decisions made in a lot of projects.

I can remember getting the Guardian’s galleries ready in time for the Oscars. Why on earth were we estimating how long things would take? In retrospect, that time would have been much better spent on getting as many of the features complete as we could. Nobody was going to move the Oscars for us, and the safety buffer we’d decided on to make sure that everything was fully tested wasn’t changing in a hurry, either. And yet, there we were, mindlessly putting points on cards. We got enough features out in time, of course, as well as some fun extras… but I wonder if the Guardian, now far more advanced in their ability to deliver than they were in my day, still spend as much time in those meetings as we used to.

I can remember asking one project manager at a different client, “These are estimates, right? Not promises,” and getting the response, “Don’t let the business hear you say that!” The reaction to failing to deliver something to the agreed estimates was to simply get the developers to work overtime, and the reaction to that was, of course, to pad the estimates. There are a lot of posts around on the perils of estimation and estimation anti-patterns.

Even when the estimates were made in terms of time, rather than story points, I can remember decisions being unchanged in the face of the “guesses”. There was too much inertia. If that’s going to be the case, I’d rather spend my time getting work done instead of worrying about the oxymoron of “accurate estimates”.

That’s my rant finished. Woody and Neil have many more examples of decisions that are often best made with alternatives to time estimation, including much kinder, less Machiavellian ones such as trade-off and prioritization.

In that post above, Neil talks about “using empiricism over guesswork”. He regularly refers to “estimates (guesses)”, calling out the fact that we do use that terminology loosely. That’s English for you; we don’t have an authoritative body which keeps control of definitions, so meanings change over time. For instance, the word “nice” used to mean “precise”, and before that it meant “silly”. It’s almost as if we’ve come full circle.

Defining “Definition”

Wikipedia has a page on definition itself, which points out that definitions in mathematics are different to the way I’ve used that term here:

In mathematics, a definition is used to give a precise meaning to a new term, instead of describing a pre-existing term.

I imagine this refers to “define y to be x + 2,” or similar, but just in case it’s not clear already: the #NoEstimates movement is not using the mathematical definition of “estimate”. (In fact, I’m pretty sure it’s not using the mathematical definition of “no”, either.)

We’re just trying to describe some terms, and the way they’re used, and point people at alternatives and better ways of doing things.

Defining Probabilistic Forecasting

I could describe the term, but sometimes, descriptions are better served with examples, and Troy Magennis has done a far better job of this than I ever have. If you haven’t seen his work, this is a really good starting point. In a nutshell, it says, “Use data,” and, “You don’t need very much data.”
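
Still, to give a flavour of the idea, here’s a minimal sketch in Ruby of a Monte Carlo forecast. The throughput samples and backlog size are invented for illustration, and a real forecast would account for much more (scope growth, seasonality, and so on):

# monte_carlo_forecast.rb — a toy probabilistic forecast.
# Draw random weeks from our own history until the backlog is empty,
# repeat thousands of times, and report percentiles, not a single value.
throughput_samples = [3, 5, 2, 6, 4]  # stories finished in recent weeks (invented)
backlog_size = 40                     # stories still to do (invented)
trials = 10_000

weeks_needed = trials.times.map do
  remaining = backlog_size
  weeks = 0
  while remaining > 0
    remaining -= throughput_samples.sample  # pick a random historical week
    weeks += 1
  end
  weeks
end.sort

puts "50% chance of finishing within #{weeks_needed[trials / 2]} weeks"
puts "85% chance of finishing within #{weeks_needed[(trials * 0.85).to_i]} weeks"

The output is a range of outcomes with probabilities attached — exactly the contrast with a single-value estimate described above.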

I imagine that when David’s talk is released, that’d be a pretty good thing to watch, too.


This article was originally published here.

A Dreyfus model for Agile adoption by Liz

Published Wed 22 Apr 2015 07:30

A couple of people have asked for this recently, so just posting it here to get it under the CC licence. It was written a while ago, and there are better maturity models out there, but I still find this useful for showing teams a roadmap they can understand.

If you want to know more about how to create and use your own Dreyfus models, this post might help.

What does an Agile Team look like?

Novice

We have a board
We put our stories on the board
Every two weeks, we get together and celebrate what we’ve done
Sometimes we talk to the stakeholders about it
We think we might miss our deadline and have told our PM
Agile is really hard to do well

Beginner

We are trying to deliver working software
We hold retrospectives to talk about what made us unhappy
When something doesn’t work, we ask our coach what to do about it
Our coach gives us good ideas
We have delegated someone to deal with our offshore communications
We have a great BA who talks to the stakeholders a lot
We know we’re going to miss our deadline; our PM is on it
Agile requires a great deal of discipline

Practitioner

We know that our software will work in production
Every two weeks, we show our working software to the stakeholders
We talk to the stakeholders about the next set of stories they want us to do
We have established a realistic deadline and are happy that we’ll make it
We have some good ideas of our own
We deal with blockers promptly
We write unit tests
We write acceptance tests
We hold retrospectives to work out what stopped us delivering software
We always know what ‘done’ looks like before we start work
We love our offshore team members; we know who they are and what they look like and talk to them every day
Our stakeholders are really interested in the work we’re doing
We always have tests before we start work, even if they’re manual
We’ve captured knowledge about how to deploy our code to production
Agile is a lot of fun

Knowledgeable

We are going to come well within our deadline
Sometimes we invite our CEO to our show-and-tell, so he can see what Agile looks like done well
People applaud at the end of the show-and-tell; everyone is very happy
That screen shows the offshore team working; we can talk to them any time; they can see us too
We hold retrospectives to celebrate what we learnt
We challenge our coach and change our practices to help us deliver better
We run the tests before we start work – even the manual tests, to see what’s broken and know what will be different when we’re done
Agile is applicable to more than just software delivery

Expert

We go to conferences and talk about our fantastic Agile experiences
We are helping other teams go Agile
The business outside of IT is really interested in what we’re doing
We regularly revisit our practices, and look at other teams to see what they’re doing
The company is innovative and fun
The company is happy to try things out and get quick feedback
We never have to work late or weekends
We deploy to production every two weeks*
Agile is really easy when you do it well!

* Wow, this model is old.


This article was originally published here.