Monday, December 15, 2014

Working Well Enough: the Four Questions

The question asked was "how do we know we need a coach?" The more general question is "how do we know if we need help?"

The question seems to assume that the goal of coaching and training is rescue; that the team is in trouble or incapable of success. That's horsefeathers.

I think that the more valid question is "are we working as well as we can?"

Here are my criteria for teams that don't need anything:

  1. We deliver
  2. Delivering software keeps getting easier
  3. We are always learning
  4. We have fun

If all four of those are true for a team, then that team probably doesn't need help. Of course if those are true, then the team is poised to provide a lot of help to other teams by describing how they work.

Monday, December 8, 2014

My itsy-bitsy contribution to Git

This is from 2005, when I was at an all-Linux, all-Python shop called Progeny Linux Systems. A lovely time, really. Great people, interesting technology, challenges and opportunities every day. Not everything we did was stellar, but we were moving together in a good direction.

We started using git when it really was a "git" (a stupid person or thing) and the tools were very confusing because they mixed noun names and verbs, and people were just kind of used to it being confusing.

You used to have to go by hand into the innards of the .git directory structure to create tags and branches.

A "porcelain" was a wrapper around git to make it more useful and tolerable, and ours was specific to building custom Linux distributions. It was a cool project. 

My contribution was very meager: I complained about the inconsistent naming of git tools at the time. 

I started with a question here:
So when this gets all settled, will we see a lot of tool renaming? 
While it would cause me and my team some personal effort (we have a special-purpose porcelain), it would be welcome to have a lexicon that is sane and consistent, and in tune with all the documentation. 
Others may feel differently, I understand. 
Started getting serious here:
Junio C Hamano wrote:
> Tim Ottinger writes:
> > git-update-cache for instance?
> > I am not sure which 'cache' commands need to be 'index' now.

> Logically you are right, but I suspect that may not fly well in
> practice. Too many of us have already got our fingers wired to
> type cache, and the glossary is there to describe both cache and
> index.

I'd vote for cleaning it up /now/. Sure, it will hurt, but if you let time
go by and do it later, it will hurt much more.

Pre-1.0 is the last chance, AFAICS.

Daniel turns it into a plan:
OK. As Horst also says, we should do this before 1.0.

0.99.6::
This hopefully will be done on Sep 7th. Tool renames
will not happen in this release, but the set of cleaned
up names will be discussed on the list during this
timeperiod. I'll draw up a strawman tonight unless
somebody else does it first.

0.99.7::
We install symbolic links for the old names. For the
documentation, we do not bother --- just install under
new names. Also remove support for ancient environment
variable names from gitenv(). Aim for Sep 17th.

0.99.8::
Aim for Oct 1st; we do not install symbolic links
anymore and supply "clean-old-install" target in the
Makefile that removes symlinks installed by 0.99.7 from
DESTDIR. This target is not run automatically from other
usual make targets; it is just there for your convenience.


I said:
> I'll draw up a strawman tonight unless somebody else
> does it first.

1. Say 'index' when you are tempted to say 'cache'.

git-checkout-cache -> git-checkout-index
git-convert-cache -> git-convert-index
git-diff-cache -> git-diff-index
git-fsck-cache -> git-fsck-index
git-merge-cache -> git-merge-index
git-update-cache -> git-update-index


2. The act of combining two or more heads is called 'merging';
fetching immediately followed by merging is called 'pulling'.

git-resolve-script -> git-merge-script

The commit walkers are called *-pull, but this is probably
confusing. They are not pulling.

git-http-pull -> git-http-walk
git-local-pull -> git-local-walk
git-ssh-pull -> git-ssh-walk

3. Non-binaries are called '*-scripts'.

In earlier discussions some people seem to like the
distinction between *-script and others; I did not
particularly like it, but I am throwing this in for
discussion.

git-applymbox -> git-applymbox-script
git-applypatch -> git-applypatch-script
git-cherry -> git-cherry-script
git-shortlog -> git-shortlog-script
git-whatchanged -> git-whatchanged-script


4. To be removed shortly.

git-clone-dumb-http should be folded into git-clone-script


... the rest, as they say, is version control history.


To this day the git toolset is consistent. 

I think it's remarkable now, because the dev team listened to me and answered even though I was not really a part of the group. I didn't have to earn a voice; it was okay to just speak up.

Even though I did nothing more than ask a question, it has always made git feel like "my" version control system. 

I guess any little interaction that leads to action will create a sense of engagement and participation.

Thanks, Git team of 2005. You were great.



Wednesday, November 19, 2014

Otter's Law


For all the "take back agile" and "agile smagile" and "apologizing for agile" crowds, and all the so-called post-agilists in the world, I give you Otter's Law:

Any methodology followed via obligation and knowledge-avoidance cannot produce positive change. 



I've ranted about change gone wrong and methodologies followed badly and horrible oversales and undersales and all, but ultimately it comes down to Otter's Law.

Many companies have seen huge gains from using XP and Scrum and what-have-you. Many others tried "the same thing" with entirely different results.

Some teams are whole-heartedly into the whole agile thing, and they seem to do pretty well. Others don't seem very excited and don't get much out of it. Is it excitement that they need?

I don't think so. I think that the lack of excitement and the lack of progress have the same root.

I think it's a lack of profluence in agile-as-practiced. Often mandated agile drives people up the old Christopher Avery "responsibility" chart all the way to "obligation." Not to responsibility.

The team doesn't have a sense that they're gaining ground at all. They may be afraid of misjudgment or criticism or failure at every step. They don't feel safe, so they don't dig in for all they're worth. They dip their toes in the water, and do what they are required to do so that they don't seem to be resisting the management directive.

Worse, teams can reach a point of total anhedonia. They lose all ability to celebrate wins or feel a sense of accomplishment because nothing they do will ever be good enough. If they dig in really hard, they'll be told that it's nice to see them making some effort, but they're going to have to do a lot more. In such a circumstance, developers (testers included) can't see the sense in changing methodology and terminology. They're all in a state of learned helplessness depression. A wall of colored cards and sticky notes won't change that.

Are they digging into the material to find out how to do a great job? If not, then they are probably operating on "knowledge-avoidance." They don't invest in researching because there's really no payoff for them. Instead, they'll do as they're told, as they understand it. There was no real shared mental model involved in the roll-out.

I think that people have reasons for not being gung-ho agilists to begin with. If you try using pressure to oblige (there's that root word again) them to act as if they're excited, the effort is not going to create the excitement you're looking for. It will instead establish a cargo cult.

But some teams do get a lot of excitement from the profluence, do see transformation in their code base and their companies, huge reduction in defects, huge increase in mastery and autonomy in team members, and huge bottom-line effects.

I'm not writing this to blame the people who are trying to push their company toward a more ideal agile environment. I sympathize with the impulse. Some managers see what is possible and want to adopt XP or Scrum for the way "good agile" can empower their organization. I don't think it's wrong for them to want it.

I'm also not blaming people who are in a system that makes them feel unsafe or guilty writing tests or refactoring existing code. They are citizens of a system. I don't expect them to stick their necks out and risk their position or reputation or standing among their peers.

I'm just thinking that every transformation runs the risk I describe with Otter's Law. If we only push people to obligation, we shouldn't expect to see responsibility automatically bloom.

The methodology doesn't come with its own culture, history, and mythology. It has to take root where you plant it.

Even the world's best teacher can't teach unit testing so well that everyone feels safe doing it. There is a control culture and a guilt culture to confront.

It has to be safe for people to try, even to fail, to adopt new practices or to take control of their code.







Monday, November 3, 2014

Wrong naming revisit


I was looking at the excellent work by appium, and saw this little snippet in an example:


var el = driver.FindElementById("name_input");
el.Clear();
el.SendKeys("Appium User");
el.SendKeys(Keys.Return);

There's nothing horribly wrong with the name el, except that it's exactly what we ask people not to do with names. 

El is short for element (a bit of mental mapping, but the length of the name reflects the shortness of scope). But it is not informative.

The variable "el" (EE-EL) looks like "e1" (EE-ONE) to the casual reader. The name doesn't look right. 

Also, "element" is what it's made of, not what it's for. Yes, it's an element. But it means "the name input field," so "nameField" or "name_field" or "name" might be better. Or even "input" if it's the only input field in the block.
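To make the point concrete, here is the same interaction sketched with an intention-revealing name. The driver and element below are hand-rolled stand-ins (my own invention, so the sketch stands alone); the real Appium client API is not required:

```python
class StubElement:
    """Stand-in for a driver element, just for this sketch."""
    def __init__(self):
        self.text = ""

    def clear(self):
        self.text = ""

    def send_keys(self, keys):
        self.text += keys


class StubDriver:
    """Stand-in for the Appium driver; returns a fresh stub element."""
    def find_element_by_id(self, element_id):
        return StubElement()


driver = StubDriver()

# "name_field" says what the element is for, not what it is made of.
name_field = driver.find_element_by_id("name_input")
name_field.clear()
name_field.send_keys("Appium User")
```

The code does exactly what the original did; only the reader's experience changes.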

Appium is pretty cool. You should check it out. But remember that writing examples is both something rather important (people copy or imitate examples) and something hard to do. We write them thinking "the api is the important thing" or "the idea is what matters" but we forget how seriously people will take examples.

What is a Coach?

I'm a software org/team/individual coach. What the heck does that mean?

On a closed mailing list, Geepaw asked for our definitions of a software team "coach," and several offered opinions. 

This one is entirely mine. 

On reading my definition, John Kern interpreted it to mean "individuals" whereas I intended these to apply to teams and organizations as well. I thank him for bringing up the possible misinterpretation, and I leave it up to my readers to understand I mean it in a plural sense; organizations and teams have skills and habits and mindsets just as individuals do.

Without further ado:

A coach is someone whose:
1) work is with people,
2) primary product is an improvement in their abilities,
3) secondary product is an improvement in the way they interact with teammates and peers,
4) teaching comes through interaction, not mere lecture or advice (that would be a counselor), and
5) work is done at the request and with the permission of the client.

I would also note that when we refer to "abilities" above, it is in the same sense that J. B. Rainsberger mentions: that of increasing capabilities and removing impediments.


Thursday, October 23, 2014

Preplanning Poker: Is This Story Even Possible?

The story says "attach an ecommerce server."

Well, maybe it says "As a product manager I want my system to incorporate an ecommerce server so that I can connect money."

Can you get that done this iteration? It sounds like a three-story-point effort to me, right?

Hold On A Second

This story doesn't have a plot. It is a state of being. I don't think that saying "once upon a time there was a little girl" would qualify me as a storyteller.

Right away I'm nervous. What the heck does it mean? What do we want to do here? 

Let's not throw this into the sprint backlog with (of all things) a story point number on it. Let's certainly not stick somebody's name on it. Let's think a little. 

We're not aligned on what this "story" means. 

The New Preplanning Poker

You already know about planning poker, and the benefits of silently estimating first, then comparing results. You know that it helps avoid anchoring and arguing and lets you see the degree of separation in estimates in the team. It's a nice consensus-seeking idea. 

I think we need to apply that concept forward to pre-planning (and to non-estimating teams). We will need five cards for this new preplanning poker, as follows:
  • Defer 
  • Accept 
  • Reject 
  • Explore
  • Split
The astute among you might notice that the acronym for this set of cards accidentally spells DARES. I guess that's okay if we're trying to determine whether we dare tackle this feature as given.

All you really need is five index cards per person and a marker, but if you really want something you can cut and print, try these (I left a blank because six cards look better than five):
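The reveal-and-tally step can be sketched in a few lines. Everything here (the function name, the example votes) is my own invention for illustration, not part of any planning-poker tool:

```python
from collections import Counter

CARDS = ("Defer", "Accept", "Reject", "Explore", "Split")  # DARES

def reveal(votes):
    """Tally one round of preplanning poker.

    votes: the card each team member chose, revealed all at once.
    Returns the tally plus a flag saying whether the team agrees.
    A split vote is not a failure; it marks where discussion starts.
    """
    assert all(v in CARDS for v in votes), "unknown card"
    tally = Counter(votes)
    return tally, len(tally) == 1

tally, unanimous = reveal(["Explore", "Explore", "Split", "Explore"])
# unanimous is False here, which is exactly the conversation we want.
```

The interesting output is the disagreement itself: a 3-to-1 split between Explore and Split tells you where the story-mapping session should begin.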



So here is the story. Pick one of the cards; don't show its face. Ready? One... two... three!

Your answers

You picked defer. Why is this not a good time to add this feature? Why is later better? Is there something far more important to do? Too few developers and testers available this week? Is it scheduling? Availability of people? A cash flow problem? A production system down?

You picked accept. You think that this task is very well-defined and the criteria for success are obvious. You're ready to go, and you know how to do the work. I'm shocked, given the nature of the story, but tell us what you know and what inspires your enthusiasm.

You picked reject. You don't think that the system needs an ecommerce system? Why? Do you have another way you'd rather we received money? Do you think this system should not receive money? Do you think that we should use this system in a way other than a money-gathering device? Why should we never do this? 

You picked explore. You think it's a good idea, and we should get involved, but you believe that there are technical issues involved that we don't understand. Is it platforms? Licensing? APIs? Languages? Authorization/Authentication issues? Architectural concerns? What do we need to know in order to move forward?  I think you are likely right - this may not be something you simply bolt on without some exploration of vendors and technologies and market segments. 

You picked split. That means that this story is not really scoped well. Maybe it needs to be rewritten as a series of stories, or a series of releases, so that each increment of this feature will be well-understood and can be tested and possibly documented. In this case, I agree with you. We might need to know who needs to pay, and for what, and what the flow is around each payment scenario. Each point of payment will likely need several stories to cover all the ways it can succeed or fail.

Will It Work?

I have played this game without cards at a few client sites. Sometimes I'm surprised at how vague a story can seem to me, but be perfectly clear to the local development team. Other times I'm surprised in the opposite way. 

We have had great story mapping and story splitting sessions result from these quick five-way triage games (is that a quintage?). It takes only minutes, and you can get a lot of focused discussion and backlog grooming done in a very short time.

If you try it out, let me know how it worked for you.







Monday, October 20, 2014

Microtesting TDD: A Quick Checklist



Quick pointers:

  1. See each test fail at least once (so you can trust it).
  2. Make failure messages helpful, because tests fail when you are working on something else.
  3. Prioritize!
  4. Use a list for tests you want to write. "Ignored" tests will do nicely.
  5. Run all the tests so you know when your last change has broken something.
  6. Keep your feedback loop as tight as you possibly can.

I get to see these all violated, so I thought I'd make a short list and save you some time. 

The first two go together. You want each test to fail so you can see the message. Some time in the future you'll make a change, and an older test will fail and you'll see the test class name, the test name, and the assert message. Those three should work together so that you know what kind of mistake you made. 
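A minimal Python example of points 1 and 2 (the function and numbers are hypothetical): the assertion message names the input and both values, so a failure two months from now diagnoses itself.

```python
def apply_discount(price):
    """Hypothetical code under test: 10% off, rounded to cents."""
    return round(price * 0.9, 2)

def test_ten_percent_discount():
    price = 100.00
    result = apply_discount(price)
    # Watch this fail at least once (e.g. assert against 91.00)
    # to confirm the message really tells you what went wrong.
    assert result == 90.00, (
        f"apply_discount({price}) returned {result}, expected 90.00"
    )

test_ten_percent_discount()
```

Compare that to a bare `assert result == 90.00`, which fails with no clue about which input produced which wrong answer.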

That goes with number 5, too. If you only run the one test you're working on, you may have dozens of breakages by the time you get around to running all the tests. If you work on a component team (not my favorite organization, but sometimes necessary) then you should run all the component's tests at least.  The more "distance" between the injection of an error and its detection as an error, the harder it is to isolate and reproduce and fix. That's why #6 is listed. 

That leaves 3 and 4, which are about planning your steps. You might find some power in the idea of a list. Start by thinking, and create a list of tests you want to write. Then pick the one you want to do next based on what is simplest or most important. When you think of new tests, add them to the list. When you find some tests are no longer important or describe cases that are already covered, drop them. I learned this by watching Kent Beck's TDD videos.
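Points 3 and 4 can live right in the test file. In Python's unittest, a skipped test is a visible, prioritized to-do item; the login example below is hypothetical, but `unittest.skip` is real:

```python
import unittest

def login(user, password):
    """Hypothetical code under test."""
    return password == "secret"

class LoginTests(unittest.TestCase):
    def test_valid_password_logs_in(self):
        self.assertTrue(login("pat", "secret"),
                        "valid credentials should log in")

    # The skip reason doubles as a planning note: this one is next.
    @unittest.skip("next on the list: lockout after 3 bad attempts")
    def test_locked_account_is_rejected(self):
        pass
```

Every test run then reports the skipped test by name, so the list can't quietly go stale the way a sticky note can.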

If I were to shorten the list, I would say:
  1. Write short tests with great messages
  2. Track the next small steps you intend to take.
  3. Keep safe by running all tests frequently. 
That's short, but not as actionable. In the long 6-step form, it's a little easier to take on.