I'm not a professional book reviewer, nor paid in any way to read. But if I could get that gig... I'd be a happy camper. I've never written a book, but I've hacked out some code and a few articles, some of which might be considered book reviews. I've worked in the Agile industry for more than a decade (but who's counting), and so - I may be a little too close to the topic to have proper literary impartiality. In fact let me just go ahead and be explicit - I've done this, been there, got the t-shirt; I shit you not - this shit is for real!
Agile Noir by Lancer Kind
Now the ground rules... I think this review will be written ... what's the word... while I'm reading, at the same time, without much delay in the reading - writing phases....in situ.... iteratively... oh I give up...
So don't be surprised - dear reader - if I just drop off in the middle...
... maybe check back every week until I finish
I've studied the cover... quite a nice graphic - too bad the whole novel isn't a graphic novel; oh - maybe it would be too bloody, I could see Agile Noir becoming a Tarantino film. As I sat looking at my book to-do stack... I skipped a few levels down the stack and pulled out Lancer Kind's 2016 Agile Noir. I have read some of his previous comics titled Scrum Noir (vol 1, 2, 3). So maybe I know what to expect - should be a fun romp in the fast lane with lots of inside-the-industry puns, innuendo and metaphors.
Well the damn dedication just reeks of an Agile Coach - Servant Leader (puke, barf.... moving on).
The High Cost of Schedule Slip
Now you may not find the situation Kartar finds himself in funny... allow me to add some overtones of irony.... I'm going to go out on a racist limb and suggest that Kartar is an Indian. That he is working in the heart of the Indian nation (Los Wages, NV), perhaps on a job for an Italian crime boss. And none of these circumstances have anything to do with one of the world of science's biggest failures - Columbus's discovery of the New World - which he thought was India, and whose inhabitants he named accordingly, thereby creating the confusion we will have to deal with evermore. Now Columbus was of course searching for a way to reduce the schedule required for shipping spices.
Kartar appears to be very immersed in planning and the art/science/pseudo-truth of planning and predicting the future of projects. And he may be a master with the Gantt chart (which is footnoted on page 18).
This is all ringing just too true ... and I'm envisioning it in the style of a 1956 black and white film...
Kartar is the metaphor of his project... it seems that it's not quite on schedule... he's late to a just-announced meeting with some superior and is driving at breakneck speed on loose sand in the Vegas outskirts, careening over bumps and ditches with the accelerator pinned to the floor - because some people in a van might be trying to kill him. Happens ALL - THE - TIME.
Scrum Noir - several volumes of graphic novel about scrum masters and the projects they encounter - also by Lancer Kind
I will have a Double Expresso - Amazon review of Scrum Noir.
Join the dialogue on G+ Agile+ group.
Dialogue on Collaboration on Facebook (PDF)
Collaboration starts with who we are and our story - not the technology or the data
"The Future of Work Is Social Collaboration from Inside Out, where people connect around the why of work from who they really are as individuals in community.
They collaborate in generative conversations and co-create what’s next, i.e. their unique Contribution of value to society – what we might call Social Good.
They collaborate by taking the time to appreciate and align each other's unique, hard wired, natural strengths, creating new levels of authentic and trusting relationships to take the Social Journey."
- Jeremy Scrivens, Director at The Emotional Economy at Work
What does dialogue mean... what does it contribute to collaboration? Here's what the inventor of the internet Al Gore had to say about this:
Audie Cornish speaks with former Vice President Al Gore about the new edition of his book, The Assault On Reason.
Well, others have noted a free press is the immune system of representative democracy. And as I wrote 10 years ago, American democracy is in grave danger from the changes in the environment in which ideas either live and spread or wither and die. I think that the trends that I wrote about 10 years ago have continued and worsened, and the hoped-for remedies that can come from online discourse have been slow to mature. I remain optimistic that ultimately free speech and a free press where individuals have access to the dialogue will have a self-correcting quality. -- Al Gore
Excerpt from NPR interview with Al Gore by Audie Cornish, March 14, 2017. Heard on All Things Considered.
Mob Programming by Woody Zuill
[View the story "Dialogue on Prerequisites for Collaboration" on Storify]
[For the nature of confusion around these terms compare and contrast these: Agile Alliance Glossary; Six Sigma; KanbanTool.com; Lean Glossary.]
The team I'm working with had a toy basketball goal over their Scrum board... like many cheap toys the rim broke. Someone bought a superior mini goal, it's a nice heavy quarter-inch plastic board with a spring-loaded rim - not a cheap toy. The team used "Command Strips" to mount it but they didn't hold for long.
The team convinced me there was a correlation between their basketball points on the charts and the team's sprint burndown chart. Not cause and effect, but correlation; have you ever stopped to think what that really means? Could it mean that something in the environment beyond your ability to measure is an actual cause of the effect you desire?
I asked the head person at the site for advice: how could we get the goal mounted in our area? He suggested that we didn't need permission, that the walls of the building were not national treasures - we should just mount it... maybe try some Command Strips. Yes, great minds... but what if the fear of getting fired after putting holes in the walls scares one away from doing the right thing? How hard is it to explain to the Texas Workforce Commission when they ask why you were fired?
The leader understood that if I asked the building facilities manager I might get denied - but if he asked for a favor... it would get done. That very day, Mike had the facilities manager looking at the board and the wall (a 15-20 minute conversation). Are you starting the clock? It's Dec 7th; lead time starts when Mike agreed to the team's request.
The team was excited, it looked like their desire was going to be granted. Productivity would flourish again.
Over the next few days I would see various people looking up at the wall and down at the basketball goal on the floor. There were about 4 of these meetings each very short and not always the same people. Team members would come up to me afterwards and ask... "are we still getting the goal?"... "when are they going to bring a drill?"... "what's taking so long?"
Running the calendar forward a bit... Today the facilities guy showed up with a ladder and drill. It took about 20 minutes. Basketball goal mounted (Dec 13th) - which clock did you stop? All of the clocks stop when the customer (team) has their product (basketball goal) in production (a game commences).
I choose to think of lead time as the time it takes an agreed upon product or service order to be delivered. In this example that starts when Mike, the dude, agreed to help the team get their goal mounted.
In this situation I want to think of cycle time as the time that people worked to produce the product (mounted goal) - others might call this process time (see Lean Glossary). And so I estimated the time that each meeting on the court looking at the unmounted goal took, plus the actual time to mount the goal (100 minutes). Technically cycle time is per unit of product - since in the software world we typically measure per story and each story is somewhat unique - it's not uncommon to drop the per-unit aspect of cycle time.
Lead time: Dec 13th minus Dec 7th = 5 work days
Cycle time: hash marks //// (4) one for each meeting at the board to discuss mounting techniques (assume 20 m. each); and about 20 minutes with ladder and drill; total 100 minutes
Lead Time 5 days; Cycle Time 100 minutes
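The arithmetic above can be sketched in a few lines of code. This is a minimal sketch under stated assumptions: I've used 2016 dates (when Dec 7th fell on a Wednesday), counted both endpoints as workdays to match the 5-day figure, and the meeting/mounting durations are the estimates from the story, not measured values.

```python
from datetime import date, timedelta

def workdays_between(start: date, end: date) -> int:
    """Count workdays from start through end inclusive, skipping weekends."""
    days = 0
    d = start
    while d <= end:
        if d.weekday() < 5:  # Monday (0) through Friday (4)
            days += 1
        d += timedelta(days=1)
    return days

# Lead time: clock starts when Mike agreed to help (Dec 7th)
# and stops when the goal is mounted and in use (Dec 13th).
lead_time_days = workdays_between(date(2016, 12, 7), date(2016, 12, 13))

# Cycle time: only the touch time -- four short meetings at the wall
# (about 20 minutes each) plus roughly 20 minutes with ladder and drill.
cycle_time_minutes = 4 * 20 + 20

print(lead_time_days)      # → 5
print(cycle_time_minutes)  # → 100
```

Notice the units don't even match (days vs. minutes) until you convert - which is itself a hint about how little of the lead time was value-add time.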
This led to a conversation on the court - under the new goal - with a few team members about what we could do with these measurements. How, if one's job was to go around and install basketball goals for every team in the building, a cycle time of 100 minutes with a lead time of 5 days might make the customers a bit unhappy. Yet for a one-off, unusual, once-a-year sort of request, that ratio of 100 minutes to 5 days was not such a bad response time. The customers were very happy in the end, although waiting for 5 days did make them a bit edgy.
But now what would happen if we measured our software development cycle time and lead time - would our (business) customers be happy? Do we produce a once in a year product? (Well yes - we've yet to do a release.) Do our lead times have similar ratios to cycle time, with very little value add time (process time)?
Well it's January 5th and this example came up in a Scrum Master's Forum meeting. After telling the tale we still did not agree on when to start and stop the two watches for Lead Time and Cycle Time. Maybe this is much harder than I thought. Turns out I'm in the minority of opinions - I'm doing it wrong!
Could you help me figure out why my view point is wrong? Comment below, please.
LeanKit just published an article on this topic - it's very good but might also misinterpret cycle time. I see no 'per unit' in their definition of cycle time. The Lead Time and Cycle Time Debate: When Does the Clock Start? by Tommy Norman.
An Experiment in measuring the team's cycle time:
After a bit of time reflecting, debating, and arguing with colleagues and other agilists online, I've decided to publish a little experiment in measuring cycle time on a Scrum team. Here's the data... what does it say? How do you think the team should react? What action should be next? What should the team's leadership feel/think/do?
The Story: This team has been working together for a while. The sprints are numbered from the start of the year... an interesting practice. This team uses 2 week sprints and is practicing Scrum. The team took a nice holiday and required some priming to get back in the swing of things after the first of the year (you see this in the trend of stories completed each sprint). Cycle time for a story on trend is longer than the sprint, which correlates with typical story "carry-over" (a story started is not finished in one sprint and is carried over to the next sprint). Generally a story is finished in the sprint but not in sequence or priority - they all take at least the full sprint to get to done. There is no correlation of story size to cycle time.
Now those are the facts, more or less -- let us see what insights we might create from this cycle time info. With no correlation of story size to cycle time AND little consistency in the number of stories finished in a sprint (trend of # of stories: 1, 6, 7, 2, 2), the question arises - what is the controlling variable, not being measured, that affects the time it takes to get from start to finish with a story? Now that the team can see that the simplest things we could track do not have a strong effect on the length of time (or the throughput) a story requires... and that means the process is not under good control - we can start to look around for some of the uncontrolled (invisible) factors -- if we are courageous enough!
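One way to make the "no correlation" claim concrete is to compute a correlation coefficient over the sprint data. The numbers below are hypothetical stand-ins, not the team's actual data; the point is the shape of the check, not the values.

```python
# Hypothetical story sizes (points) and their cycle times (workdays).
sizes       = [3, 5, 8, 2, 5, 13, 3, 8]
cycle_times = [12, 11, 15, 14, 10, 13, 16, 12]

def pearson_r(xs, ys):
    """Pearson correlation coefficient: +1/-1 means a perfect linear
    relationship; near 0 means size tells you little about duration."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson_r(sizes, cycle_times)
print(round(r, 2))  # near zero: story size is not driving cycle time
```

A near-zero r is exactly the situation described above: the controlling variable is something the board isn't showing - such as delay time.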
We reflected that many of the stories that carry over, and are virtually unpredictable in size/time/effort, appear to have large delays or multiple delays within their implementation phase. So we devised a quick and dirty way to track this delay. The assumption is that this delay inherent in the work is the unmeasured/uncontrolled variable that throws the correlation of story size with cycle time out of kilter.
Our devised technique for tracking delay per story - a yellow dot on the task with a tick mark for every day the task is stuck in-process (delayed).
LeanKit published this excellent explanation of their choices in calculating cycle time within their tool: Kanban Calculations: How to Calculate Cycle Time by Daniel Vacanti.
LeanKit Lead Time Metrics: Why Weekends Matter
Elon Musk turns a tweet into reality in 6 days by Loic Le Meur
The ROI of Multiple Small Releases
https://en.wikipedia.org/wiki/Frank_Bunker_Gilbreth_Sr.
The Hummingbird Effect: How Galileo Invented Timekeeping and Forever Changed Modern Life
by Maria Popova. How the invisible hand of the clock powered the Industrial Revolution and sparked the Information Age.
This past week we’ve given away free online training and a number of resources to help you combat some of the most vexing problems agile teams encounter when writing user stories.
Now it’s time to open the doors to the full course: Better User Stories.
“In my 30 years of IT experience, this class has without question provided the most ‘bang for buck’ of any previous training course I have ever attended. If you or your organization are struggling with user stories, then this class is absolutely a must have. I simply can’t recommend it enough. 5 Stars!!” - Douglas Tooley
If you watched and enjoyed the free videos, you’ll love Better User Stories. It’s much more in-depth, with 9 modules of advanced training, worksheets, lesson transcripts, audio recordings, bonus materials, and quizzes to help cement the learning.
Registration for Better User Stories will only be open for one week
Because of the intense level of interest in this course, we’re expecting a large number of people to sign up. That’s why we’re only opening the doors for one week, so that we have the time and resources to get everyone settled.
If demand is even higher than we expect, we may close the doors early, so if you already know you’re interested, the next step is to:
Choose one of 3 levels of access. Which is right for you?
I know when it comes to training, everyone has different needs, objectives, learning preferences and budgets.
That’s why you can choose from 3 levels of access when you register:
- Professional - Get the full course with lifetime access to all materials and any future upgrades
- Expert Access - Acquire the full course and become part of the Better User Stories online community, where you can discuss ideas, share tips and submit questions to live Q+A calls with Mike
- Work With Mike - Secure all of the above, plus private, 1:1 time with Mike to work through any specific issues or challenges.
What people are already saying
We recently finished a beta launch where a number of agilists worked through all 9 modules, providing feedback along the way. This let us tweak, polish and finish the course to make it even more practical and valuable.
Here’s what people had to say:
Thank you for an amazing course. Better User Stories is by far the best course I have had since I started my agile journey back in 2008. - Anne Aaroe
Packed full of humor, stories, and exercises, the course is easy to take at one’s own leisure. Mike Cohn has a way of covering complex topics, such as splitting user stories, with easy-to-understand acronyms and charts, and reinforces these concepts with quizzes and homework that really bring the learning objectives to life. So, whether you’re practicing Scrum or just looking to learn more about user stories, this course will provide you the roadmap needed to improve at any experience level, at a cost that everyone can appreciate. - Aaron Corcoran
Click here to read a full description of the course, and what you get with each of the 3 levels of access. Questions about the course?
Let me know in the comments below.
Today’s post introduces the third installment in a free series of training videos all about user stories. Available for a limited time only, you can watch all released videos by signing up to the Better User Stories Mini-Course. Already signed up? Check your inbox for a link to the latest video, or continue reading to find out about today’s lesson.
An extremely common problem with user stories is including the right amount of detail.
If you include too much detail in user stories this makes story writing take longer than it would otherwise. As with so many activities in the business world, we want to guard against spending more time on something than necessary.
Also, spending time adding too much detail leads to slower development as tasks like design, coding, and testing do not start until the details have been added. This delay also means it takes longer for the team and its product owner to get feedback from users and stakeholders.
But adding too little detail can lead to different but equally frustrating problems. Leave out detail and the team may struggle to fully implement a story during a sprint as they instead spend time seeking answers.
With too little detail, there’s also an increased chance the development team will go astray on a story by filling in the gaps with what they think is needed rather than asking for clarification.
There’s danger on both sides.
But, when you discover how much detail to add to your stories, it’s like Goldilocks finding the perfect bowl of porridge. Not too much, not too little, but just right.
But how do you discover how much is the right amount?
You can learn how in a new, 13-minute free video training I’ve just released. It’s part of the Better User Stories Mini-Course. To watch the free video, simply sign up here and you’ll get instant access.
Remember, if you’ve already signed up to the course you don’t need to sign in again, just go to www.betteruserstories.com and video #3 will already be unlocked for you.
Adding the right amount of detail--not too much, not too little--is one of the best ways to improve how your team works with user stories. I’m confident this new video will help.
P.S. This video is only going to be available for a very short period. I encourage you to watch it now at www.betteruserstories.com.
Seeing Theory learning site. "By 2030 students will be learning from robot teachers 10 times faster than today" by World Economic Forum.
This is what happens when humans debate ethics in front of a super intelligent learning AI.
TED Radio Hour : NPR : Open Source World. Tim Berners-Lee tells the story of how Gopher killed its user-base growth and how CERN declared the WWW open source on April 30th, 1993, ensuring it would continue to prosper. And was its growth exponential?
In the practice of Scrum many people appear to have their favorite method of calculating the team's velocity. For many, this exercise appears very academic. Yet when you get three people and ask them you will invariably get more answers than you have belly-buttons.
Velocity—the rate of change in the position of an object; a vector quantity, with both magnitude and direction. “Calculus is the mathematical study of change.” — Donald Latorre
This pamphlet describes the method I use to teach beginning teams this one very important Scrum concept via a photo journal simulation.
Some of the basic reasons many teams are "doing it wrong"... (from my comment on Doc Norton's FB question: Hey social media friends, I am curious to hear about dysfunctions on agile teams related to use of velocity. What have you seen?)
- mgmt not understanding purpose of Velocity empirical measure;
- teams using some bogus statistical manipulation called an average without understanding the constraints within which an average is valid;
- SM allowing teams to carry over stories and get credit for multiple sprints within one measurement (lack of understanding of empirical);
- pressure to give "credit" for effort but zero results - a vicious cultural feedback loop;
- lack of understanding of the virtuous cycle that can be built with empirical measurement and understanding of trends;
- no action to embrace the virtuous benefits of a measure-respond-adapt model (specifically story slicing to appropriate size)
- breaking the basic tenets of the Scrum estimation model - allow me to expand for those who have already condemned me for violating written (or suggesting unwritten) dogma...
- a PBL item has a "size" before being Ready (a gate action) for planning;
- the team adjusts the PBL item size any/every time they touch the item and learn more about it (like at planning/grooming);
- each item is sized based on effort/etc. from NOW (or start of sprint - a point in time) to DONE (never on past sunk cost effort);
- empirical evidence and updated estimates are a good way to plan;
- therefore carryover stories are resized before being brought into the next sprint - also reprioritized - and crying over spilt milk or lost effort credit is not allowed in baseball (or sprint planning)
Day 1 - Sprint Planning
A simulated sprint plan with four stories is developed. The team forecast they will do 26 points in this sprint.
The team really gets to work.
Little progress is visible, concern starts to show.
Day 4
Do you feel the sprint progress starting to slide out of control?
Day 5
About one half of the schedule is spent, but only one story is done.
Day 6
The team has started work on all four stories, will this amount of ‘WIP’ come back to hurt them?
Although two stories are now done, the time box is quickly expiring.
The team is mired in the largest story.
Day 9
The output of the sprint is quite fuzzy. What will be done for the demo, what do we do with the partially completed work?
The Sprint Demo day. Three stories done (A, B, & D) get demoed to the PO and accepted.
Close the Sprint
Calculate the Velocity - a simple arithmetic sum.
Story C is resized given its known state and the effort to get it from here to done.
What is done with the unfinished story? It goes back into the backlog and is ordered and resized.
Backlog grooming (refinement) is done to prepare for the next sprint planning session.
Trophies of accomplishments help motivation and release planning. Yesterday’s weather (pattern) predicts the next sprint’s velocity.
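The "simple arithmetic sum" can be shown in code. The point values here are hypothetical, chosen only to match the 26-point forecast in the simulation: only stories accepted by the PO count toward velocity, and carried-over story C contributes nothing this sprint.

```python
# Hypothetical sizes for the four sprint stories (sums to the 26-point forecast).
stories = {"A": 8, "B": 5, "C": 8, "D": 5}

# Stories demoed to the PO and accepted at the sprint review.
accepted = {"A", "B", "D"}

# Velocity is a simple arithmetic sum over done-and-accepted stories only.
velocity = sum(points for name, points in stories.items() if name in accepted)
print(velocity)  # → 18; story C earns no partial credit
```

Story C then goes back to the backlog, gets resized from its current state to done, and competes for a slot in the next sprint on its new merits.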
Sprint 2 Begins with Sprint Planning
Day 1
Three stories are selected by the team, including the resized (now 8 points) story C.
Work begins on yet another sprint.
Work progresses on story tasks.
The cycles of days repeats and the next sprint completes.
Close Sprint 2
Calculate the Velocity - a simple arithmetic sum.
In an alternative world we may do more complex calculus. But will it lead us to better predictability?
In this alternative world one wishes to receive partial credit for work attempted. Yet the story was resized based upon the known state and getting it to done.
Simplicity is the ultimate sophistication. — Leonardo da Vinci
Now let’s move from the empirical world of measurement and into the realm of lies.
Simply graphing the empirical results and using the human eye & mind to predict is more accurate than many people’s math.
Velocity is an optimistic measure. An early objective is to have a predictable team.
Velocity may be a good predictor of release duration. Yet it is always an optimistic predictor.
Variance Graphed: Pessimistic projection (red line) & optimistic projection (green line) of release duration.
While in the realm of fabrication of information — let’s better describe the summary average with its variance.
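Here is a sketch of describing the average with its variance, using made-up sprint velocities (not data from the simulation). The pessimistic and optimistic lines from the graph correspond to projecting the remaining backlog at one standard deviation below and above the mean.

```python
# Hypothetical velocities for the last five sprints.
velocities = [26, 21, 24, 19, 25]
remaining_points = 120  # points left in the release backlog

n = len(velocities)
mean_v = sum(velocities) / n
variance = sum((v - mean_v) ** 2 for v in velocities) / (n - 1)  # sample variance
std_v = variance ** 0.5

# Project release duration in sprints: the mean alone gives the optimistic
# single-number answer; the one-standard-deviation band is more honest.
optimistic  = remaining_points / (mean_v + std_v)   # green line
expected    = remaining_points / mean_v
pessimistic = remaining_points / (mean_v - std_v)   # red line

print(round(expected, 1), "sprints expected")
print(round(optimistic, 1), "to", round(pessimistic, 1), "sprints")
```

Reporting the band instead of the single number is the cheapest defense against velocity being read as a promise.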
One of the most insidious obstacles to continuous delivery (and to continuous flow in software delivery generally) is the works-on-my-machine phenomenon. Anyone who has worked on a software development team or an infrastructure support team has experienced it. Anyone who works with such teams has heard the phrase spoken during (attempted) demos. The issue is so common there’s even a badge for it:
Perhaps you have earned this badge yourself. I have several. You should see my trophy room.
There’s a longstanding tradition on Agile teams that may have originated at ThoughtWorks around the turn of the century. It goes like this: When someone violates the ancient engineering principle, “Don’t do anything stupid on purpose,” they have to pay a penalty. The penalty might be to drop a dollar into the team snack jar, or something much worse (for an introverted technical type), like standing in front of the team and singing a song. To explain a failed demo with a glib “<shrug>Works on my machine!</shrug>” qualifies.
It may not be possible to avoid the problem in all situations. As Forrest Gump said…well, you know what he said. But we can minimize the problem by paying attention to a few obvious things. (Yes, I understand “obvious” is a word to be used advisedly.)
Pitfall 1: Leftover configuration
Problem: Leftover configuration from previous work enables the code to work on the development environment (and maybe the test environment, too) while it fails on other environments.
Pitfall 2: Development/test configuration differs from production
The solutions to this pitfall are so similar to those for Pitfall 1 that I’m going to group the two.
Solution (tl;dr): Don’t reuse environments.
Common situation: Many developers set up an environment they like on their laptop/desktop or on the team’s shared development environment. The environment grows from project to project, as more libraries are added and more configuration options are set. Sometimes the configurations conflict with one another, and teams/individuals often make manual configuration adjustments depending on which project is active at the moment. It doesn’t take long for the development configuration to become very different from the configuration of the target production environment. Libraries that are present on the development system may not exist on the production system. You may run your local tests assuming you’ve configured things the same as production only to discover later that you’ve been using a different version of a key library than the one in production. Subtle and unpredictable differences in behavior occur across development, test, and production environments. The situation creates challenges not only during development, but also during production support work when we’re trying to reproduce reported behavior.
Solution (long): Create an isolated, dedicated development environment for each project
There’s more than one practical approach. You can probably think of several. Here are a few possibilities:
- Provision a new VM (locally, on your machine) for each project. (I had to add “locally, on your machine” because I’ve learned that in many larger organizations, developers must jump through bureaucratic hoops to get access to a VM, and VMs are managed solely by a separate functional silo. Go figure.)
- Do your development in an isolated environment (including testing in the lower levels of the test automation pyramid), like Docker or similar.
- Do your development on a cloud-based development environment that is provisioned by the cloud provider when you define a new project.
- Set up your continuous integration (CI) pipeline to provision a fresh VM for each build/test run, to ensure nothing will be left over from the last build that might pollute the results of the current build.
- Set up your continuous delivery (CD) pipeline to provision a fresh execution environment for higher-level testing and for production, rather than promoting code and configuration files into an existing environment (for the same reason). Note that this approach also gives you the advantage of linting, style-checking, and validating the provisioning scripts in the normal course of a build/deploy cycle. Convenient.
All those options won’t be feasible for every conceivable platform or stack. Pick and choose, and roll your own as appropriate. In general, all these things are pretty easy to do if you’re working on Linux. All of them can be done for other *nix systems with some effort. Most of them are reasonably easy to do with Windows; the only issue there is licensing, and if your company has an enterprise license, you’re all set. For other platforms, such as IBM zOS or HP NonStop, expect to do some hand-rolling of tools.
Anything that’s feasible in your situation and that helps you isolate your development and test environments will be helpful. If you can’t do all these things in your situation, don’t worry about it. Just do what you can do.
If you’re working on a desktop, laptop, or shared development server running Linux, FreeBSD, Solaris, Windows, or OSX, then you’re in good shape. You can use virtualization software such as VirtualBox or VMware to stand up and tear down local VMs at will. For the less-mainstream platforms, you may have to build the virtualization tool from source.
One thing I usually recommend is that developers cultivate an attitude of laziness in themselves. Well, the right kind of laziness, that is. You shouldn’t feel perfectly happy provisioning a server manually more than once. Take the time during that first provisioning exercise to script the things you discover along the way. Then you won’t have to remember them and repeat the same mis-steps again. (Well, unless you enjoy that sort of thing, of course.)
For example, here are a few provisioning scripts that I’ve come up with when I needed to set up development environments. These are all based on Ubuntu Linux and written in Bash. I don’t know if they’ll help you, but they work on my machine.
- Provision a Node dev environment
- Provision a Python dev environment
- Provision a COBOL dev environment
- Provision a Java dev environment
- Provision a Ruby dev environment
If your company is running RedHat Linux in production, you’ll probably want to adjust these scripts to run on CentOS or Fedora, so that your development environments will be reasonably close to the target environments. No big deal.
If you want to be even lazier, you can use a tool like Vagrant to simplify the configuration definitions for your VMs.
One more thing: Whatever scripts you write and whatever definition files you write for provisioning tools, keep them under version control along with each project. Make sure whatever is in version control for a given project is everything necessary to work on that project…code, tests, documentation, scripts…everything. This is rather important, I think.
One way of isolating your development environment is to run it in a container. Most of the tools you’ll read about when you search for information about containers are really orchestration tools intended to help us manage multiple containers, typically in a production environment. For local development purposes, you really don’t need that much functionality. There are a couple of practical containers for this purpose.
These are Linux-based. Whether it’s practical for you to containerize your development environment depends on what technologies you need. To containerize a development environment for another OS, such as Windows, may not be worth the effort over just running a full-blown VM. For other platforms, it’s probably impossible to containerize a development environment.
Develop in the cloud
This is a relatively new option, and it’s feasible for a limited set of technologies. The advantage over building a local development environment is that you can stand up a fresh environment for each project, guaranteeing you won’t have any components or configuration settings left over from previous work. Here are a couple of options.
Expect to see these environments improve, and expect to see more players in this market. Check which technologies and languages are supported to see whether one of these will be a fit for your needs. Because of the rapid pace of change, there’s no sense in listing what’s available as of the date of this article.
Generate test environments on the fly as part of your CI build
Once you have a script that spins up a VM or configures a container, it’s easy to add it to your CI build. The advantage is that your tests will run on a pristine environment, with no chance of false positives due to leftover configuration from previous versions of the application or from other applications that had previously shared the same static test environment, or because of test data modified in a previous test run.
Many people have scripts that they’ve hacked up to simplify their lives, but they may not be suitable for unattended execution. Your scripts (or the tools you use to interpret declarative configuration specifications) have to be able to run without issuing any prompts (such as prompting for an administrator password). They also need to be idempotent (that is, it won’t do any harm to run them multiple times, in case of restarts). Any runtime values that must be provided to the script have to be obtainable by the script as it runs, and not require any manual “tweaking” prior to each run.
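A provisioning script that meets those requirements might look like the following sketch: no prompts, runtime values taken from the environment, and guarded writes so a second run changes nothing. The directory, file, and setting names are made up for illustration:

```shell
set -eu                               # fail fast; unset variables are errors, not prompts
APP_DIR="${APP_DIR:-./build/app}"     # runtime values come from the environment, never from a prompt
CONF="$APP_DIR/app.conf"
mkdir -p "$APP_DIR"                   # idempotent: succeeds whether or not the directory exists
touch "$CONF"
# append the setting only if it's missing, so re-runs do no harm
grep -q '^port=8080$' "$CONF" || echo 'port=8080' >> "$CONF"
grep -q '^port=8080$' "$CONF" || echo 'port=8080' >> "$CONF"   # simulated restart: still one line
```

Running the script twice produces exactly the same state as running it once, which is what makes it safe for unattended execution and restarts.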
The idea of “generating an environment” may sound infeasible for some stacks. Take the suggestion broadly. For a Linux environment, it’s pretty common to create a VM whenever you need one. For other environments, you may not be able to do exactly that, but there may be some steps you can take based on the general notion of creating an environment on the fly.
For example, a team working on a CICS application on an IBM mainframe can define and start a CICS environment any time by running it as a standard job. In the early 1980s, we used to do that routinely. As the 1980s dragged on (and continued through the 1990s and 2000s, in some organizations), the world of corporate IT became increasingly bureaucratized until this capability was taken out of developers’ hands.
Strangely, as of 2017 very few development teams have the option to run their own CICS environments for experimentation, development, and initial testing. I say “strangely” because so many other aspects of our working lives have improved dramatically, while that aspect seems to have moved in retrograde. We don’t have such problems working on the front end of our applications, but when we move to the back end we fall through a sort of time warp.
From a purely technical point of view, there’s nothing to stop a development team from doing this. It qualifies as “generating an environment,” in my view. You can’t run a CICS system “in the cloud” or “on a VM” (at least, not as of 2017), but you can apply “cloud thinking” to the challenge of managing your resources.
Similarly, you can apply “cloud thinking” to other resources in your environment, as well. Use your imagination and creativity. Isn’t that why you chose this field of work, after all?
Generate production environments on the fly as part of your CD pipeline
This suggestion is pretty much the same as the previous one, except that it occurs later in the CI/CD pipeline. Once you have some form of automated deployment in place, you can extend that process to include automatically spinning up VMs or automatically reloading and provisioning hardware servers as part of the deployment process. At that point, “deployment” really means creating and provisioning the target environment, as opposed to moving code into an existing environment.
This approach solves a number of problems beyond simple configuration differences. For instance, if a hacker has introduced anything to the production environment, rebuilding that environment out of source that you control eliminates that malware. People are discovering there’s value in rebuilding production machines and VMs frequently even if there are no changes to “deploy,” for that reason as well as to avoid “configuration drift” that occurs when we apply changes over time to a long-running instance.
Many organizations run Windows servers in production, mainly to support third-party packages that require that OS. An issue with deploying to an existing Windows server is that many applications require an installer to be present on the target instance. Generally, information security people frown on having installers available on any production instance. (FWIW, I agree with them.)
If you create a Windows VM or provision a Windows server on the fly from controlled sources, then you don’t need the installer once the provisioning is complete. You won’t re-install an application; if a change is necessary, you’ll rebuild the entire instance. You can prepare the environment before it’s accessible in production, and then delete any installers that were used to provision it. So, this approach addresses more than just the works-on-my-machine problem.
When it comes to back end systems like zOS, you won’t be spinning up your own CICS regions and LPARs for production deployment. The “cloud thinking” in that case is to have two identical production environments. Deployment then becomes a matter of switching traffic between the two environments, rather than migrating code. This makes it easier to implement production releases without impacting customers. It also helps alleviate the works-on-my-machine problem, as testing late in the delivery cycle occurs on a real production environment (even if customers aren’t pointed to it yet).
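The twin-environment switch can be illustrated with a symlink flip, which is the same pattern a router or web server follows at larger scale (the directory names here are invented for the sketch):

```shell
# Sketch: two identical environments; "deployment" flips a pointer instead of moving code
mkdir -p blue green
echo "app v1" > blue/version
echo "app v2" > green/version
ln -sfn green live                 # route traffic to the freshly built environment
cat live/version                   # → app v2
ln -sfn blue live                  # rollback is just flipping the pointer back
cat live/version                   # → app v1
```

The switch is near-instant in both directions, which is why this approach makes releases (and rollbacks) low-drama.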
The usual objection to this is the cost (that is, fees paid to IBM) to support twin environments. This objection is usually raised by people who have not fully analyzed the costs of all the delay and rework inherent in doing things the “old way.”
Pitfall 3: Unpleasant surprises when code is merged
Problem: Different teams and individuals handle code check-out and check-in in various ways. Some check out code once and modify it throughout the course of a project, possibly over a period of weeks or months. Others commit small changes frequently, updating their local copy and committing changes many times per day. Most teams fall somewhere between those extremes.
Generally, the longer you keep code checked out and the more changes you make to it, the greater the chances of a collision when you merge. It’s also likely that you will have forgotten exactly why you made every little change, and so will the other people who have modified the same chunks of code. Merges can be a hassle.
During these merge events, all other value-add work stops. Everyone is trying to figure out how to merge the changes. Tempers flare. Everyone can claim, accurately, that the system works on their machine.
Solution: A simple way to avoid this sort of thing is to commit small changes frequently, run the test suite with everyone’s changes in place, and deal with minor collisions quickly before memory fades. It’s substantially less stressful.
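In practice this is nothing more than a habit of committing each small, coherent change as soon as the tests pass. A toy sketch (assuming git is installed; the file and messages are invented):

```shell
# Sketch: two tiny commits instead of one week-long change set (requires git)
git init -q merge-demo
git -C merge-demo config user.email "dev@example.com"
git -C merge-demo config user.name "Dev"
echo "line 1" > merge-demo/app.txt
git -C merge-demo add app.txt && git -C merge-demo commit -qm "small change: add line 1"
echo "line 2" >> merge-demo/app.txt
git -C merge-demo add app.txt && git -C merge-demo commit -qm "small change: add line 2"
```

Each commit is small enough that any merge collision it causes is trivial to understand and resolve while the change is still fresh in mind.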
The best part is you don’t need any special tooling to do this. It’s just a question of self-discipline. On the other hand, it only takes one individual who keeps code checked out for a long time to mess everyone else up. Be aware of that, and kindly help your colleagues establish good habits.
Pitfall 4: Integration errors discovered late
Problem: This problem is similar to Pitfall 3, but one level of abstraction higher. Even if a team commits small changes frequently and runs a comprehensive suite of automated tests with every commit, they may experience significant issues integrating their code with other components of the solution, or interacting with other applications in context.
The code may work on my machine, as well as on my team’s integration test environment, but as soon as we take the next step forward, all hell breaks loose.
Solution: There are a couple of solutions to this problem. The first is static code analysis. It’s becoming the norm for a continuous integration pipeline to include static code analysis as part of every build. This occurs before the code is compiled. Static code analysis tools examine the source code as text, looking for patterns that are known to result in integration errors (among other things).
Static code analysis can detect structural problems in the code such as cyclic dependencies and high cyclomatic complexity, as well as other basic problems like dead code and violations of coding standards that tend to increase cruft in a codebase. It’s just the sort of cruft that causes merge hassles, too.
A related suggestion is to take any warning-level errors from static code analysis tools and from compilers as real errors. Accumulating warning-level errors is a great way to end up with mysterious, unexpected behaviors at runtime.
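With a C toolchain, for example, the promotion is a single flag. In the sketch below, the unused variable is harmless as a warning but fails the build once -Werror is on (this assumes a gcc/clang-style cc; other toolchains have equivalent flags, e.g. javac -Werror):

```shell
cat > warn_demo.c <<'EOF'
#include <stdio.h>
int main(void) { int unused; printf("ok\n"); return 0; }
EOF
# -Wall flags the unused variable; -Werror turns that warning into a hard build failure
if cc -Wall -Wextra -Werror -o warn_demo warn_demo.c 2>warn.log; then BUILD=passed; else BUILD=failed; fi
echo "build $BUILD"
```

Once warnings break the build, they never accumulate, and they never get the chance to turn into mysterious runtime behavior.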
The second solution is to integrate components and run automated integration test suites frequently. Set up the CI pipeline so that when all unit-level checks pass, then integration-level checks are executed automatically. Let failures at that level break the build, just as you do with the unit-level checks.
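The gating logic itself is simple to express; in this sketch the two functions stand in for the real unit and integration suites:

```shell
# Sketch: integration checks run only when unit checks pass; either failure breaks the build
run_unit_checks()        { echo "unit checks: pass"; }          # placeholder for the real suite
run_integration_checks() { echo "integration checks: pass"; }   # placeholder for the real suite
if run_unit_checks && run_integration_checks; then
  STATUS="ok"
else
  STATUS="broken"       # in a real pipeline, exit nonzero here to break the build
fi
echo "build: $STATUS"
```

The short-circuit `&&` means integration checks never run against code that has already failed at the unit level, which keeps the fast feedback fast.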
With these two methods, you can detect integration errors as early as possible in the delivery pipeline. The earlier you detect a problem, the easier it is to fix.
Pitfall 5: Deployments are nightmarish all-night marathons
Problem: Circa 2017 it’s still common to find organizations where people have “release parties” whenever they deploy code to production. Release parties are just like all-night frat parties, only without the fun.
The problem is that the first time applications are executed in a production-like environment is when they are executed in the real production environment. Many issues only become visible when the team tries to deploy to production.
Of course, there’s no time or budget allocated for that. People working in a rush may get the system up and running somehow, but often at the cost of regressions that pop up later in the form of production support issues.
And it’s all because at each stage of the delivery pipeline, the system “worked on my machine,” whether a developer’s laptop, a shared test environment configured differently from production, or some other unreliable environment.
Solution: The solution is to configure every environment throughout the delivery pipeline as close to production as possible. The following are general guidelines that you may need to modify depending on local circumstances.
If you have a staging environment, rather than twin production environments, it should be configured with all internal interfaces live and external interfaces stubbed, mocked, or virtualized. Even if this is as far as you take the idea, it will probably eliminate the need for release parties. But if you can, it’s good to continue upstream in the pipeline, to reduce unexpected delays in promoting code along.
Test environments between development and staging should be running the same version of the OS and libraries as production. They should be isolated at the appropriate boundary based on the scope of testing to be performed.
At the beginning of the pipeline, develop on the same OS and same general configuration as production, if possible. It’s likely you will not have as much memory or as many processors as in the production environment. The development environment also will not have any live interfaces; all dependencies external to the application will be faked.
At a minimum, match the OS and release level to production as closely as you can. For instance, if you’ll be deploying to Windows Server 2016, then use a Windows Server 2016 VM to run your quick CI build and unit test suite. Windows Server 2016 is based on NT 10, so do your development work on Windows 10 because it’s also based on NT 10. Similarly, if the production environment is Windows Server 2008 R2 (based on NT 6.1) then develop on Windows 7 (also based on NT 6.1). You won’t be able to eliminate every single configuration difference, but you will be able to avoid the majority of incompatibilities.
Follow the same rule of thumb for Linux targets and development systems. For instance, if you will deploy to RHEL 7.3 (kernel version 3.10.x), then run unit tests on the same OS if possible. Otherwise, look for (or build) a version of CentOS based on the same kernel version as your production RHEL (don’t assume). At a minimum, run unit tests on a Linux distro based on the same kernel version as the target production instance. Do your development on CentOS or a Fedora-based distro to minimize inconsistencies with RHEL.
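A cheap guard is to compare the host kernel against the production target at the start of the test run. The target version below is only an example; check your actual production instance rather than assuming:

```shell
# Sketch: warn when the CI/dev host kernel differs from the production target
TARGET_KERNEL="3.10"                        # e.g. RHEL 7.3; verify against your real target
HOST_KERNEL="$(uname -r | cut -d. -f1,2)"   # e.g. "3.10" from "3.10.0-514.el7.x86_64"
if [ "$HOST_KERNEL" = "$TARGET_KERNEL" ]; then
  echo "kernel match: $HOST_KERNEL"
else
  echo "WARNING: testing on kernel $HOST_KERNEL, production is $TARGET_KERNEL"
fi
```

A loud warning at build time is far cheaper than a kernel-level incompatibility discovered in production.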
If you’re using a dynamic infrastructure management approach that includes building OS instances from source, then this problem becomes much easier to control. You can build your development, test, and production environments from the same sources, assuring version consistency throughout the delivery pipeline. But the reality is that very few organizations are managing infrastructure in this way as of 2017. It’s more likely that you’ll configure and provision OS instances based on a published ISO, and then install packages from a private or public repo. You’ll have to pay close attention to versions.
If you’re doing development work on your own laptop or desktop, and you’re using a cross-platform language (Ruby, Python, Java, etc.), you might think it doesn’t matter which OS you use. You might have a nice development stack on Windows or OSX (or whatever) that you’re comfortable with. Even so, it’s a good idea to spin up a local VM running an OS that’s closer to the production environment, just to avoid unexpected surprises.
For embedded development where the development processor is different from the target processor, include a compile step in your low-level TDD cycle with the compiler options set for the target platform. This can expose errors that don’t occur when you compile for the development platform. Sometimes the same version of the same library will exhibit different behaviors when executed on different processors.
Another suggestion for embedded development is to constrain your development environment to have the same memory limits and other resource constraints as the target platform. You can catch certain types of errors early by doing this.
For some of the older back end platforms, it’s possible to do development and unit testing off-platform for convenience. Fairly early in the delivery pipeline, you’ll want to upload your source to an environment on the target platform and build/test there.
For instance, for a C++ application on, say, HP NonStop, it’s convenient to do TDD on whatever local environment you like (assuming that’s feasible for the type of application), using any compiler and a unit testing framework like CppUnit.
Similarly, it’s convenient to do COBOL development and unit testing on a Linux instance using GnuCOBOL; much faster and easier than using OEDIT on-platform for fine-grained TDD.
But in these cases the target execution environment is very different from the development environment. You’ll want to exercise the code on-platform early in the delivery pipeline to eliminate works-on-my-machine surprises.
Summary
The author’s observation is that the works-on-my-machine problem is one of the leading causes of developer stress and lost time. The author further observes that the main cause of the works-on-my-machine problem is differences in configuration across development, test, and production environments.
The basic advice is to avoid configuration differences to the extent possible. Take pains to ensure all environments are as similar to production as is practical. Pay attention to OS kernel versions, library versions, API versions, compiler versions, and the versions of any home-grown utilities and libraries. When differences can’t be avoided, then make note of them and treat them as risks. Wrap them in test cases to provide early warning of any issues.
The second suggestion is to automate as much testing as possible at different levels of abstraction, merge code frequently, build the application frequently, run the automated test suites frequently, deploy frequently, and (where feasible) build the execution environment frequently. This will help you detect problems early, while the most recent changes are still fresh in your mind, and while the issues are still minor.
Let’s fix the world so that the next generation of software developers doesn’t understand the phrase, “Works on my machine.”
Today’s post introduces the second installment in a free series of training videos all about user stories. Available for a limited time only, you can watch all released videos by signing up to the Better User Stories Mini-Course. Already signed up? Check your inbox for a link to the latest video, or continue reading to find out about today’s lesson.
One of the most common struggles faced by agile teams is the need to split user stories. I'm sure you've struggled with this. I certainly did at first.
In fact, when I first began using Scrum, some of our product backlog items were so big that we occasionally opted for six-week sprints. With a bit more experience, though, that team and I saw enough ways to split work that we could have done one-day sprints if we'd wanted.
But splitting stories was hard at first. Really hard.
But I've got some good news for you. Not only have I figured out how to split stories on my own, I've learned how to explain how to do it so that anyone can quickly become an expert.
What I discovered is that almost every story can be split with one of five techniques. Learn those five simple techniques and you're set.
Even better, the five techniques form an easily memorable acronym: SPIDR.
I've just released a new, 20-minute, free video training that describes each of these techniques as part of the Better User Stories Mini-Course. To watch it simply sign up here and you’ll get instant access.
Remember, if you’ve already signed up to the course you don’t need to sign in again, just check your inbox for an email from me with a link to the latest lesson.
Unless you've already cracked the code on splitting stories, you definitely want to learn the five techniques that make up the SPIDR approach by watching this free video training.
P.S. This video is only going to be available for a very short period. I encourage you to watch it now at https://www.betteruserstories.com.
Has a new era of enablement reached the hockey stick curve of exponential growth? I think it has. I've been picking up this vibe, and I may not be the first to sense things around me. I've gotten some feedback that I'm very poor at it in the personal sphere. However, on a larger scale, on an abstract level, in the field of tech phenomena, I've got a bit of a streak going. Mind you, I'm not rich on a Zuckerberg level... and my general problem is actualizing the idea as opposed to just having the brilliant idea - or recognizing the opportunity.
A colleague told me I would like this tinker's Dash Button hack. It uses the little hardware IoT button Amazon built to sell more laundry soap - a bit of imaginative thinking outside of the supply-chain problem domain and a few hours of coding. It repurposes the giant AWS Cloud Mainframe, which the Matrix Architect designed to enslave you, to give the ACLU a Fiver ($5) every time you feel that one of the talking heads (#45) in Washington DC has infringed upon one of your civil liberties.
Now I think this is the power of a true IoT: an enabling technology can allow an emergent property that was not conceived of in its design. No one has really tried to solve the problem of the democratic voice of the people. We use the power of currency as a proxy for so many concepts in our society, and it appears that SCOTUS has accepted that currency and its usage is a form of speech (although not free - do you see what I did there?). What would the Architect of our Matrix learn if he/she/it could collect the thoughts of people when they had a visceral reaction to an event, correlate that reaction to the event, measure the power of the reaction over a vast sample of the population, and feed that reaction into the decision-making process via a stream of funding for or against a proposed policy? Now, the real power of this feedback system will occur when the feedback message may mutate the proposal (the power of Yes/AND).
I can see this as enabling a real trend toward democracy - and of course this disrupts the incumbent power structure of the representative government (federal republic). Imagine a hack-a-thon where all the political organizations, the charities, and the religions came together in a convention center. There are tables and spaces and boxes upon boxes of Amazon Dash Buttons. We ask the organizations what they like about getting a Fiver every time the talking head mouths off, and what data they may also need to capture to make the value stream most effective in their unique organization. And we build and test this into an eco-system on top of the AWS Cloud.
"You know, if one person, just one person does it, they may think he's really sick and they won't take him."
What would it take to set this up one weekend... I've found that I'm not a leader. I don't get a lot of followers when I have an idea... but I have found that I can make one heck of a good first-follower!
"And three people do it, three, can you imagine, three people walking in singin a bar of Alice's Restaurant and walking out. They may think it's an organization. And can you, can you imagine fifty people a day, I said fifty people a day walking in singin a bar of Alice's Restaurant and walking out. And friends they may think it's a movement."
I will just throw this out here and allow the reader to link up the possibilities.
- A Dash Button that submits a Fiver to a select charity
- The TX bill: A Man's Right to Know ($100 fine for masturbation)
- Funding of Planned Parenthood by whatever means necessary.
- Playboy pivots its business model away from nude pictures
GitHub Repo Donation Button by Nathan Pryor
Instructables Dash Button projects
Coder Turns Amazon Dash Button Into ACLU Donation Tool by Mary Emily O'Hara
Life With The Dash Button: Good Design For Amazon, Bad Design For Everyone Else by Mark Wilson
How to start a movement - Derek Sivers TED Talk
Today I want to let you know about a new mini-course I created to help overcome some of the common and challenging problems with user stories.
It’s free to register and you can access the first video instantly, or watch it a little later at your convenience. Once you do sign up, I’ll also send you an email to let you know as soon as the next video is released.
Please note: This training is free but will only be available for the next 2 weeks
Last year I did a survey to discover what challenges were stopping people from writing successful user stories. Nearly 2,000 people got in touch to highlight the following issues:
- Not writing stories that truly focus on the user’s needs
- Wondering how to keep a team engaged from writing to development
- Splitting stories quickly without compromising value
- Not knowing when to add detail, or how much to include
Plus many, many more. I wanted to create a mini-course that would tackle some of these issues, and I wanted to offer it to you for free.
Even though there’s no fee to access the videos, the training isn’t light-touch, an introduction, or theory-filled. It’s based on practical materials I’ve used for teaching user stories to more than 20,000 people over the last fifteen years. What’s more, you’ll also have the chance to comment, ask questions and discuss the training featured in each video.
To go alongside the launch of the mini-course, over the next couple of weeks, both the blog and weekly tips email will feature lessons and advice on how to write better user stories.
And if you really want you and your team to master this topic, there will be an option to unlock more in-depth, advanced training (details about that coming soon).
Today, get instant access to video 1: Three Tips for Successful Story Mapping in a Story-Writing Workshop
The first video is available now. This 20-minute training looks at some of the common mistakes people make at the early stage of writing user stories, particularly when conducting a story-writing workshop.
In this video you’ll learn:
- Why people struggle to find the balance between too much, and too little team engagement when writing user stories.
- How to save a significant amount of time in future iteration planning by inviting the right people to your story-writing workshop
- A simple, but powerful method of visualizing the relationship between stories
- Practical ways to make sure your team focuses on the user’s needs at all times
- Methods to help you prioritize and plan stories, fast
Questions about the training? Already watched the first video? I’d love to hear from you in the comments below.
Jeff Sutherland points the fickle finger of fate at Ken Schwaber for starting this fable:
I've hated having to tell teams this joke... the lore of the Scrum pig and chicken is so pervasive that before long someone is going to call someone else a chicken (or a pig)... and then you have to tell the joke to help that person save face... it can be quite uncomfortable for me.
I think my disdain for this joke has to do with two of America's least favorite farm animals being featured. We call people chickens to say they have little courage. We call people pigs to insult their appearance (clothing choices, weight, manners). Had the joke featured a cat and a dog... it would be so different - wouldn't it?
Now Jake, it appears, has taken this joke metaphor to a new level... good job, Jake!
Some fun videos about Agile & Scrum
Scrum cartoons and fictional stories - a list
Scrum Pig and Chicken - part 1 by Jake Calabrese
Organizational Commitment: Pig and Chicken – Part 2 by Jake Calabrese
Does Your Culture Require Your Demise - Pig & Chicken part 3 by Jake Calabrese
If you’re familiar with our model of organizational transformation, then you know we’re fond of the metaphor of taking a journey in a specific direction, possibly (but not necessarily) ending up at the farthest imaginable point of that journey. We think of the journey as a series of expeditions, each of which aims to fulfill a portion of a vision and plan.
The metaphor is both spatial and temporal. When you picture a group of adventurers embarking on an expedition, the visualization is mainly spatial: They are marching across territory toward a goal that lies on the horizon. The horizon moves ahead of them as they march. Their concept of “the possible” depends on what they are able to see or imagine from their current position and, as they progress, they are able to see and imagine more and more possibilities.
A way forward based on Lean principles involves conducting a series of experiments. Learnings from each experiment inform the design of the next experiment. Always, there’s a goal in mind. Over time, outcomes meet needs more and more effectively. Improvement over time suggests a temporal angle on the “journey” metaphor.
Step-by-step Improvement Over Time
It’s easy to find examples of similar journeys that suggest change over time. One that I find relevant, particularly in larger, well-established IT organizations, is the tale of the Eddystone Lighthouse. You can read about it on Wikipedia. There are also many videos on YouTube about the lighthouse, and it has been featured on the Science Channel program, “Impossible Engineering.”
I see this as an example of a temporal journey of improvement because of the progression of engineering advancements reflected in the series of lighthouses built on the site from 1699 to the present. Similarly, improvement in organizational performance often involves building a series of solutions that incrementally move closer to strategic goals.
…it turns out to be faster, cheaper, and better to find the way forward through a series of experiments than to design the ultimate solution in advance.
Lighthouses and expeditions
It’s easy to get tangled up in a sea of metaphors. Even referring to the situation as a “sea” could be one metaphor too many, were it not for the fact we’re also talking about lighthouses.
This lighthouse at Eddystone was the first to be built in the middle of the sea, erected on a rock that was submerged 20 hours a day. Over the course of its history, it was rebuilt four times, each version quite different from its predecessors. Engineers had learned things and materials science had progressed, enabling each successive lighthouse to be better than those that had stood before.
The same pattern occurs in organizational transformation. A Scrum team on a journey from Basecamp 2 to Basecamp 3 will use the Scrum events and artifacts quite differently than a team progressing from zero to Basecamp 1. The more mature team will use Scrum in a lighter-weight fashion than the novice team. For example, they have learned how to level out their work by crafting User Stories of roughly the same size. They’re on their way to dispensing with story-level sizing. Meanwhile, the novice team may still be struggling with separating the notion of size from the notion of time, and they may have difficulty visualizing the possibility that story-level estimation is a crutch that can be made unnecessary by mastering other practices.
Also, the organization surrounding the two teams will be at different levels of proficiency with lightweight methods. You’ll often hear us speak of clarity around the backlog, or words to that effect. An expedition approaching Basecamp 3 will have learned skills in identifying worthwhile initiatives, prioritizing those initiatives, and refining backlogs that are sensible and actionable by program and delivery teams.
It’s more feasible for the delivery teams in the Basecamp 3 expedition to function in a lightweight way than for the novice teams, which are supported by organizations still early on the learning curve, still struggling to reach Basecamp 1. They may not receive actionable backlog items on a consistent basis. Everyone is trying to get a handle on quite a few unfamiliar concepts and methods. Even an advanced team would have challenges in maintaining flow and delivering value without appropriate support from the program and portfolio teams.
The two organizations just can’t build the same kinds of lighthouses. They have to advance one step at a time.
Why not just determine the final solution through research?
Sometimes, people are uncomfortable with this approach. They would prefer it if we could design the “final” solution in advance and then simply implement it. That way, they would have only one sizeable capital investment to make, and they could check the “improvement” box. All done!
An aside: This mentality may be at the root of the numerous attempts to “implement” a framework, such as SAFe or LeSS, and lock it in as the “final state.” Although the proponents of such frameworks are consistent in saying they are meant to be a starting point for ongoing improvement, people tend to try and “implement” a framework as if it were a “solution.” Are they hoping for a magic bullet?
The “implementation” approach may be feasible for relatively small enterprises with fairly narrowly bounded goals. When a larger enterprise that has longstanding habits and entrenched processes sets a goal to “be more effective” or “be more competitive” or “improve the customer experience” or “be able to pivot quickly,” it’s harder to visualize a Golden End State in a vacuum. Such goals are real and meaningful, but difficult to quantify, and the path to achieving them in the face of an ever-changing competitive landscape is not easy to discern.
Perhaps counterintuitively, it turns out to be faster, cheaper, and better to find the way forward through a series of experiments than to design the ultimate solution in advance. It takes less time and less money to build something, learn from it, discard it, and build another (repeating the sequence several times) than it does to learn all the possibilities and pitfalls of numerous options in advance through “research.” This has been a practical reality for a long time, far longer than the buzzword agile has been in use.
That pesky moving horizon
Now you may be asking, “If you’ve seen this pattern before and you know what to expect, why don’t you just tell us what we need to do to be at Basecamp 5? Let’s start Monday!”
That would be great. Unfortunately, things don’t seem to work that way. Combining the experiences of the LeadingAgile consultants, we’ve seen that approach many times in many kinds of organizations. We’ve tried starting with culture change; with procedural change; with technical practices. We’ve tried driving change top-down; bottom-up; by consensus or invitation; by management dictate. What’s common in those cases is that when people are told what to do, the desired change in mentality doesn’t happen. When people are invited to change their thinking, they simply don’t know how. People remain in the mindset of following orders. The only difference is they’re following different orders than before. The changes don’t penetrate deeply, and they aren’t sticky. People become frustrated with the results, and abandon the effort to change.
It seems to be important that people deeply understand the why of the change. To become aware of some of the possibilities is a good first step, but it isn’t sufficient to create meaningful and lasting improvement. People need to be able to get their heads around the potential benefits and risks of any given change. For that to be possible, they need guidance beyond the limits of their comfort zone…but not too far beyond those limits. Very radical change, introduced suddenly, will only lead to fear and frustration. The only way to reach Basecamp 5 is to walk there, step by step.
Remember the bit about the horizon moving ahead of you? It does. At the outset of the journey, you don’t have enough information to visualize possible end states. There may even be so much organizational “fog” that you can’t really tell which way to turn. The best you can do is set a direction that seems to be consistent with your goals. Then you have to take a deep breath and start walking, pausing to check your compass frequently and adjusting course accordingly.
Maybe the first few lighthouses you build will burn down or be swept away by the sea (or be destroyed by Napoleon’s army, as the case may be), but eventually you’ll build one that nothing and no one can tear down. The key is to be willing to try things that don’t turn out exactly the way you hoped, and learn from those experiences. Just keep going. As long as you have a good compass, you won’t get lost.
I believe the number one reason for failure or waste is a lack of clarity or understanding. If you get clarity on something, it gives you the freedom to decide if you want to do it or not. If something is ambiguous, you may agree in principle but you don’t know what you’re really getting yourself into.

OKRs
Firstly, what are your Objectives and Key Results (OKR)? How do you set and communicate goals and results in your organization? Because you want people to move together in the right direction, you need to get clarity.

KPIs
What are your Key Performance Indicators (KPI)? How do you want to measure value that demonstrates how effectively your company is achieving key business objectives? Because you want your organization to evaluate its success at reaching targets, you need to get clarity.

Structure
What does the team design or structure of the organization look like on portfolio, program, product, and service layers? We need a shared understanding of which individuals or teams are responsible for what.

Governance
What does the governance of the organization look like? How do we manage our budget, dependencies, risks, or quality? What are the inputs, outputs, and artifacts?

Metrics and Tools
Because we want to manage our system of delivery, what are the necessary metrics and tools of the organization?

Get Clarity
Remember, if you expect others to commit to something, regardless of whether it’s a process or a deliverable, you need a shared understanding.
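The call for clarity on OKRs above can be made concrete. Here is a minimal sketch (my own illustration, not from any specific OKR tool) of capturing objectives and measurable key results, with the common 0.0–1.0 scoring convention; the objective and key results shown are hypothetical examples.

```python
# A minimal sketch of OKRs as data: a measurable Key Result leaves no
# ambiguity about what "done" means, which is the clarity argued for above.
from dataclasses import dataclass, field

@dataclass
class KeyResult:
    description: str   # a measurable outcome, e.g. "releases per quarter"
    target: float      # the number we aim for
    current: float = 0.0

    def score(self) -> float:
        """Progress toward the target, capped at 1.0."""
        if self.target == 0:
            return 0.0
        return min(self.current / self.target, 1.0)

@dataclass
class Objective:
    statement: str
    key_results: list = field(default_factory=list)

    def score(self) -> float:
        """An objective's score is the average of its key results' scores."""
        if not self.key_results:
            return 0.0
        return sum(kr.score() for kr in self.key_results) / len(self.key_results)

# Usage: a hypothetical objective halfway to both of its key results
obj = Objective("Shorten feedback loops", [
    KeyResult("Releases per quarter", target=6, current=3),
    KeyResult("Teams demoing every sprint", target=10, current=5),
])
print(round(obj.score(), 2))  # 0.5
```

The point of writing it down this starkly is that an ambiguous goal ("be more responsive") cannot be encoded as a `KeyResult` at all, which is exactly the test for whether you have clarity.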
Well I thought I'd try to open the kimono to see if it helps me...
I've studied psychometric assessments; some I find useful, and some I feel are just a step to the left of astrology charting, though they might not be harmful for self-reflection. I've also found that it takes an expert to explain the tools and reports such that a layperson can understand and make positive use of the assessment and its report. And while I've been "certified" in some of these tools/techniques, I do not practice them enough to be competent - my pitch is akin to a snake-oil salesman's.
Here is my DiSC Classic profile:
DiSC Classic by Wiley

Here is my Trimetric assessment (DiSC, EQ, Motivation) by Abelson Group
DiSC Wheel
Motivators Wheel
Emotional Quotient Wheel
Here is my Myers Briggs Type Indicator - Level II assessment:
MBTI Level One
MBTI Level II reports
Here is my EQ 2.0 - Emotional Intelligence:
EQ 2.0 by TalentSmart, Inc.

Here is my Action & Influence report:
Here is my Personalysis assessment:
See Also:
Authentic Happiness - resources in Positive Psychology - 20 assessments
Martin Seligman TED talk on Positive Psychology
Personalysis assessment Reviewed
MBTI Level II assessment Reviewed
Psychometric testing resources
British Psychological Society’s Psychological Testing Centre (PTC) provides information and services relating to standards in tests and testing for test takers, test users, test developers and members of the public.
National Cultural Studies - assessments at the meta level - the personality and behaviors of nations.
In just two months our newly delivering Scrum team had put into production the "undoable" feature - BAM! - value delivered, trust confirmed, transformation successful.
"My light bulb moment was during the product demo in the Sprint Review Meeting, when the state of Washington Appellate Clerk of Court told me he and the courts had been waiting 20 years for the feature that our team had just delivered. In just two months our newly delivering Scrum team had put into production the "undoable" feature - BAM! - value delivered, trust confirmed, transformation successful. He later sent me the requirement spec for the 20-year-old feature and it read just like our epic story and its children we discovered. Yes, this was a completely different system than the previous retired system - yet it had the same customer needs. We had transitioned from a deadlocked in analysis paralysis development group to a Scrum team in under 3 months, delivering into production every month new features, bug fixes, and tested working software." -- David Koontz
See other Light Bulb Moments at Sliger Consulting Light Bulb Moments
Have you seen other collections of Light Bulb Moments? Please comment below.
So naturally the conversation went something like this:
Inquisitive person: "Hi David, what's an Agile Transition Guide? Is that like a coach?"
David: "Hi, glad you asked. What does a coach do in your experience?"
Inquisitive person: "They help people and teams improve their software practices."
David: "Yes, I do that also."
Inquisitive person: "Oh, well then why don't you call yourself a coach?"
David: "Great question: Let's see... well one of the foundational principles of coaching (ICF) is that the coached asks for and desires an interaction with the coach, there is no authority assigning the relationship, or the tasks of coaching. So do you see why I don't call myself a coach?"
Inquisitive person: "Well no, not really. That's just semantics. So you're not a coach... OK, but what's is a guide?"
David: "Have you ever been fishing with a guide, or been whitewater rafting with a guide, or been on a tour with a guide? What do they do differently than a coach? Did you get to choose your guide, or were they assigned to your group?"
Inquisitive person: "Oh, yeah. I've been trout fishing with a guide, they were very helpful, we caught a lot of fish, and had more fun than going on our own. They also had some great gear and lots of local knowledge of where to find the trout."
David: "Well, there you have it... that's a guide - an expert, a person that has years of experience, has techniques to share and increase your JOY with a new experience."
Inquisitive person: "Yes, I'm starting to see that difference, but can't a coach do this also?"
David: "No, not unless the coach is willing to switch to a different modality - to one of mentoring, teaching, consulting, or protecting. Some times a guide must take over for the participant and keep the person/group within the bounds of safety - think about a whitewater river guide. A coach - by strict interpretation of the ethics, is not allowed to protect the person from their own decisions (even if there are foreseen consequence of this action."
And now the conversation starts to get very interesting; the Whys start to flow and we can go down the various paths to understanding. See Richard Feynman's dialogue about "Why questions".
So, I'm not a Coach
I've been hired as a coach (largely because the organization didn't truly understand the label, role, and ethics of coaching). This relationship was typically dysfunctional from the standpoint of being a coach. So I decided to study the role of coaching. I've done a few classes and seminars, had a personal one-on-one coach, read a lot, and drawn some conclusions from my study - I'm not good at coaching within the environment and situation in which Agile Coaches are hired. I've learned that regardless of the title an organization uses (Agile Coach, Scrum Master, etc.), it doesn't mean coaching. It intends the relationship to be vastly different. Since I'm very techie, I appreciate using the correct words and phrases for a concept. (Paraphrasing Phil Karlton: In software there are two major challenges: cache invalidation and naming things. Two Hard Things)
So to stop the confusion and the absurd use of the terms, I quit referring to my role and skills as coaching. Then I needed a new term. Having lots of friends who have been Outward Bound instructors, and understanding their roles, the concept of a river guide appealed to me in this Agile transformational role. Therefore I coined the term Agile Transformation Guide. But many organizations do not wish to transform their organization; they do wish for some type of transition, perhaps from traditional development to a more agile or lean mindset. So a transition guide is more generic, capable of situational awareness of what the organization desires.
What does a guide really do?
This question may best be answered by David Kelley in his TED talk, "How to build your creative confidence." In this talk David points out his desire to teach parents that there are not two types of children - the creative and the non-creative. There are, however, children that lost their desire to express their unique talents early in their lives. He helps people regain this capability.
It is much like how Dr. Bandura developed his treatment for phobias. David will tell you about this basic guided mastery technique that restores self-efficacy.
This is what an Agile Transition Guide does... they guide you on a journey toward self-efficacy via many techniques in mastery of your domain skills and capabilities.
Six Kinds of Agile Coaches by Ravi Verma - describes the HUGeB coach, the one to be.
So what is a Coach and What is a Trainer - Agile 102
Where Agile goes to Die - Dave Nicolette - about those companies that are late adopters or laggards in the innovation curve and the challenges that "coaches" have when engaging with them.
The Difference Between Coaching & Mentoring
Scrum Master vs Scrum Coach by Charles Bradley
Agile Coach -or- Transition Guide to Agility by David Koontz; the whitewater guide analogy to agile coaching.
Academic paper: Coaching in an Agile Context by David Koontz
What is the ROI of Agile Coaching - Payton Consulting
Interesting Twitter conversation about the nature of "coaching" with Agile42 group.
"When told that they are empowered to do something; this message is actually interrupted to dis-empower the persons agency."How does this misinterpretation occur? Why do we humans mess up this simple act of communication?
Let's look at an example:
For a few months I had been working with a new team of software developers at a large organization. Like many organizations, they had already done the agile/scrum thing and it didn't work for them. Recently the leadership had built a satellite office and started from a very small pool of tenured people to grow its new "resource" of technical people. This time the leadership decided that hiring some experienced people who had "done Agile" and "knew how to Scrum" might give them the needed energy to actually get somewhere with the initiative. At least these experts could teach the new people how to do agile right. I guess I was one of these "experts" (another term for a has-been drip under pressure).
Observing the new team for a few weeks, I noticed they referred to their process by the label "kanban," yet they never appeared to move any sticky notes on their board, and never made new ones or retired any old stickies. Mostly they just pointed at them and talked about something not written on the note. It was very difficult for the outsider (me) to follow the process they were using -- or maybe they were not using any process, and I was following them -- to nowhere. This would take a bit more observation.
That was several months ago, and my memory is not the best at recovering details when there is no emotion overlaying them - and believe me, there was little emotion at their stand-up meetings; I'd call them boring (the meetings, not the people). In the 4 weeks I was observing, I don't remember that they ever shipped any software, spoke about a customer visit, or discussed a solution with a customer - I don't think anything they talked about ever got done.
So, I somehow convinced their manager that what they were calling a process could not be named - and that wasn't a good thing (sorry, Alexander - that attribute is not the same as your "quality without a name"). It didn't reflect any known process. He didn't know much about the process either. It was labeled "kanban," yet they didn't exhibit any of the behaviors of a team practicing the Kanban process; they didn't even know what steps the process might involve. They had also tried Scrum, but "it didn't work" either. It was very difficult to discuss these failures with the team or the manager; they were reluctant to discuss what about the process had failed, or what actions they implemented when these failures occurred.
I made a bold assumption - they didn't know anything about the processes they espoused they were using. They had been to training classes, therefore they knew... something. They could use the new lexicon in a sentence (90% of the time the sentences made sense). But how do you tell someone they are ignorant (with the full meaning - that they know nothing about a subject, and it's not their fault for having never been exposed properly to the knowledge)? That's a crucial conversation. I rarely handle these well - I got lucky this time, perhaps.

I suggested the team join me in a workshop to talk about the practices they were using and how these map to the Agile Manifesto. We did this exercise and branched off into many valuable conversations. During this exercise we decided they were already being Agile, since many of their practices supported the principles of the manifesto. So the question was not whether they were Agile, but how much was enough... could they improve their agility - did they want to try something different?

Along the way we arrived at an understanding of a difference of opinion: when I used words in the lexicon, I intended certain meanings that they did not intend when they used the same words. We often used similar phrases but rarely meant the same things. That level of miscommunication can be tedious to overcome while still keeping an open mind that the other person has something valuable to offer.
For example: they had been using the word "kanban" to describe the process they were using because that was the term applied to the Rational Team Concert (RTC) workflow template the company created. They had chosen that workflow because it was easier to use than the complicated Scrum workflow the organization's PMO created. It turned out to have nothing to do with the development process they were using. They finally agreed that they were not doing Scrum, and didn't really know how to do it... they hadn't learned much from the PowerPoint presentation (imagine that).
I got extremely lucky with one of the leaders of this team. She said to the team that she thought the team should give the scrum master (me) a chance - just go along with whatever I said, regardless of how stupid it sounded. Try it for a few weeks, it wouldn't hurt, and then in a few weeks decide if it was working for the team or not. I learned of this leader's suggestion to her teammates only months later. It was without a doubt the turning point in the relationship. After this détente, the team members began to implement with ease suggestions on how they could implement Scrum. One might say that this leader empowered me, but she never said a word about it to me.
We did more workshops in a scrumy fashion, we had a board of items to complete. We tracked these items on a board right there in the workshop space. Sometimes we split the topics up more.
Sometimes the topic didn't get finished in the time allotted, and we had to decide if it was good enough to continue with other topics and come back later to finish the discovered aspects. We used the rate at which we were progressing day after day to predict that we wouldn't get all of the topics covered by the end of the week. But that was good enough, because each day the team selected the most important, most valuable topics and we put off the least valuable. Sometimes a topic was dependent upon another item on the board, and we had to cover some of a less valuable item so that the dependency was resolved.

In these workshops we covered many Agile principles, the Scrum process framework (3 roles, 3 meetings, 3 artifacts, and a lot more), engineering practices (many originally defined by Extreme Programming gurus), and local organization customs, terms, policies, and procedures. Much of what was suggested by some agile or scrum nutjob was in contradiction with the customs and policies of the organization - at least on the surface. Great conversations developed where the team joined in filling the shared pool of knowledge. This pool, now holding both company and agile/scrum knowledge, was easily sorted into a new understanding of how both systems could co-exist and interrelate. It wasn't easy, but it generally worked.
The team started understanding the process of Scrum and working toward getting stories in the backlog to done: slicing stories that had proven too large in the past and delivering working software to the business each sprint. They developed the ability to easily estimate a story or an epic set of stories within minutes. Their ability to read their task board and predict which stories (if any) were not going to get completed within their sprint time-box grew to the point that they quit wasting time tracking a sprint task burndown. They understood that if they got into a new domain that ability might be diminished, and they could easily revert to tracking task aspirin (a unit of effort, not time) on a chart in the future. The team knew their velocity and could accomplish a sprint planning session in about an hour. They could predict when they needed to spend more time refining tough stories before planning, and they learned how to slice stories for value and leave the fluff on the cutting room floor.
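The rate-based prediction the team used - observe how much gets done, then project the remainder - is simple enough to sketch. This is a minimal illustration under my own assumptions (the numbers and function name are hypothetical), not the team's actual tooling:

```python
# A minimal sketch of velocity-based forecasting: average the points
# completed in recent sprints, then divide the remaining backlog by
# that velocity and round up to whole sprints.
import math

def forecast_sprints(remaining_points: float, completed_per_sprint: list) -> int:
    """Sprints needed to finish the backlog at the observed average velocity."""
    velocity = sum(completed_per_sprint) / len(completed_per_sprint)
    return math.ceil(remaining_points / velocity)

# Usage: 60 points remain; the last three sprints completed 18, 22, and 20
print(forecast_sprints(60, [18, 22, 20]))  # 3
```

The same arithmetic works for a workshop board (topics per day instead of points per sprint); the value is not precision but the early warning that the plan won't fit the time-box.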
But all was not well with a performing team... (cue the scary music - set up a scene with dramatic lighting)... the manager was looking for a way to measure the team. And as people are wont to do... without any thought, they look for a dashboard to tell them how well the "team is being run." They want to know if the "team is being driven at their top performance," and they need some numbers to prove it. Generally this is a warning sign that many conversations were wasted and no learning occurred; in hindsight, the wrong person was doing too much of the talking, and the other didn't draw from the pool of shared knowledge but instead just admired the pool from the shore, never bothering to enter. The team's manager wanted me to build a dashboard tool using the company's tool of record (RTC) that would give him all the numbers to prove his team was performing well.
I've made a strategic decision over the years not to become the tooling expert - especially with the bountiful assortment of tools the software project management industry offers. Needless to say, I didn't want to become an expert in RTC (a tool rumored to be on its way out for this organization, which was on its 3rd Agile adoption curve). I asked what this dashboard would have on it, what it would display, etc. The answer fit on a sticky note, because that's what I had with me... something like velocity, the backlog, and what the team is currently working on was the manager's response.
Here's my Nth mistake... I hoped the request would dissipate, as many things in a transition tend to do, so I wasn't motivated to create a dashboard for the manager that would reproduce the team's well-maintained Scrum task board. I offered to work with him on reading the board; he attended many of the team's Scrum sessions at the board, rarely engaged but appeared attentive.
[this story will continue ... as I've lost my round-toit -- wonder if it's with my marbles?]
The Rise of Emergent Organizations by Beth Comstock
The ScrumMaster - How to develop a team - by Marcel van Hove
Bill & Groundhog
Well this happened about ten years ago, and about 6 years ago, or maybe it was 4 years past, and seems like we did this about 24 months ago... or it could be today!
The Agile Transition Initiative at the company has come upon an inflection point (do ya' know what that is... have you read Tipping Point?). I'm not exactly sure of its very precise date... but Feb. 2nd would be the perfect timing. The inflection has to do with which direction your Agile Transition Initiative takes from this point into the future. Will it continue on its stated mission to "transform" the organization? Or will it stall out and revert slowly to the status quo?
How do I recognize this perilous point in the agile trajectory? Well there are several indications. But first we must digress.
[We must Digress.]
Punxsutawney Phil Says more Winter in 2017

In this story we will use germ theory as a metaphor. Germ theory came about in about... (wait - you guess - go ahead... I'll give you a hundred-year window... guess...). That's right! "The germ theory was proposed by Girolamo Fracastoro in 1546, and expanded upon by Marcus von Plenciz in 1762." Wow, we've known about these little buggers for a long time. And we started washing our hands... (when... correct - again). "The year was 1846, and our would-be hero was a Hungarian doctor named Ignaz Semmelweis." So right away business (society) started using a new discovery - a better way to treat patients... or well, it took a while - maybe a few months, or maybe more than 300 years.
But back to the metaphor - in this metaphor the organization will be like a human body, and the change initiative will take the role of a germ. The germ is a change introduced to the body by some mechanism we are not very concerned with - maybe the body rubbed up against another body. I hear that's a good way to spread knowledge.
We are interested in the body's natural process when a new factor is introduced. What does a body do? Well, at first it just ignores this new thing - heck, it's only one or two little germs, they can't hurt anything (there are a shit load of germs in your body right now). But the germs are there to make a home - they consume energy and reproduce (at this point let's call it a virus - meh - what's the difference?). So the virus reproduces rapidly and starts to cause ripples... the body notices this and starts to react. It sends in the white blood cells - with antibodies. Now, I don't understand the biological responses - but I could learn all about it... but this is a metaphor, and the creator of a metaphor has artistic license to bend the truth a bit to make the point. Point - WHAT IS THE POINT?
The point is that the body (or organization) will have a natural reaction to the virus (change initiative), and when the body recognizes this change, its reaction is natural - maybe call it subconscious, involuntary. Well, let's just say it's been observed multiple times - the body tries very hard to rid itself of the unwanted bug (change). It may go to unbelievable lengths to get rid of it - like tossing all its cookies back up, or squirting all its incoming energy into the waste pit. It could even launch a complete shutdown of all communication to a limb and allow it to fester and die, hopefully to fall off and not kill the complete organism. Regaining the status quo is in the fundamental wiring of the human body. Anything that challenges that stasis requires great energy to overcome this fundamental defense mechanism.
[Pop the stack.]
So back to the indicators of the tipping point in agile transitions. Let's see if our metaphor helps us see these indications. The tossing of cookies - check. That could be new people hired to help with the change being tossed back out of the organization. The squirts - check. That is tenured people who have gotten on board with the change being challenged by others to just water it down... make it look like the things we used to do. Heck, let's even re-brand some of those new terms with our meanings - customized for our unique situation - that only we have ever seen, and therefore only we can know the solutions. Folks, this is called the Bull Shit Reaction.
Now imagine a limb of the organization that has adopted the new way - they have caught the virus. There is a high likelihood that someone in the organization is looking at them as "special" - a bit jealous of their new status - and will start hoarding information flow from that successful group. Now, true, that group was special - they attempted early transition and have had (in this organization's realm) success. Yet there was some exception to normal business process that made that success possible. How could we possibly reproduce that special circumstance across the whole org chart? Maybe we just spin them off and let them go it alone - good luck, now back to business.
What's a MIND to do with this virus ridden body and all these natural reactions?
Well we are at an inflection point... what will you do?
Which curve do you want to be on? - by Trail Ridge Consulting
[What Should You Do?]
Say you are in the office of the VP of some such important silo, and they are introducing themselves to you (they are new at the org). They ask you how it's going. You reply: well, very well. [That was the appropriate social response, wasn't it?] Then they say, no - how's the agile transformation going? BOOM! That is a bit of a shocking first question in a get-to-know-each-other session - or is it that type of session? What should you do?
I will skip to the option I chose... because the other options are for crap - unless you have a different motive than I do... and that is a very real possibility; if so, definitely DON'T DO THIS:
Ask the VP if this is a safe space where you can tell the truth. Be sincere and concerned - then listen. Their response is the direction you must now take; you have ceded control of your action to them. Listen, and listen to what is not said - decide whether they want the truth or want to be placated. Then give them what they desire. For example (an obviously easy example - perhaps), imagine that the VP said: I want the truth, you should always tell the truth.
Don't jump too fast to telling the truth... how can you ascertain how much of the truth they can handle? You should definitely have an image of Nicholson as Colonel Nathan R. Jessep as he addresses the court on "Code Red".
You might ask about their style: is it bold and blunt, or soft and relationship-focused? You could study their DiSC profile to see what their nature may tell you about how to deliver the truth.
Imagine you determine that they want it blunt (I've found that, given a choice, most people say this, and only 75% are fibbing). So you suggest that it's not going well. The transformation has come to an inflection point (pause to see if they understand that term). You give some archeology - the organization has tried to do an agile transformation X times before. The VP is right with you: "and we wouldn't be trying again if those had succeeded." Now that was a nice hors d'oeuvre - savory. The main course is served - the VP asks why.
Now you could offer your opinion, deliver a fun anecdote or two or 17, refer to some data, write a white paper, give them a Let Me Google That For You link. Or you could propose that they find the answer themselves.
Here's how that might go down: Ask them to round up between 8.75 and 19.33 of the most open-minded tenured (5 - 20 yrs) people up and down the hierarchy: testers, developers, delivery managers, directors, administrators (always include them - they are key to this process - because they know everything that has happened for the last 20 years). Invite them to join the VP in a half-day discovery task - to find out why this Agile thing gets ejected before it takes hold of the organization. If you come away from this workshop with anything other than culture at the root of the issue, then congratulations - your organization is unique. Try the Journey Line technique with the group. It's a retrospective of the organization's multi-year, multi-attempt effort to do ONE THING, multiple times. Yes, kinda like Groundhog Day.
The Fleas in the Jar Experiment. Who Kills Innovation? The Jar, The Fleas or Both? by WHATSTHEPONT