
Feed aggregator

Generative power of Not-Knowing -or- Utopia


Polish Poet and Nobel Laureate Wisława Szymborska on How Our Certitudes Keep Us Small and the Generative Power of Not-Knowing

“Whatever inspiration is, it’s born from a continuous ‘I don’t know.’”
By Maria Popova  - BrainPickings
Purely for the fun of it, Maria Popova drew Wisława Szymborska’s poetic island in a map inspired by Thomas More’s Utopia.

Polish poet Wisława Szymborska (July 2, 1923–February 1, 2012) “explored how our contracting compulsion for knowing can lead us astray in her sublime 1976 poem ‘Utopia,’ found in her Map: Collected and Last Poems (public library)” -- Maria Popova

UTOPIA
Island where all becomes clear.
Solid ground beneath your feet.
The only roads are those that offer access.
Bushes bend beneath the weight of proofs.
The Tree of Valid Supposition grows here
with branches disentangled since time immemorial.
The Tree of Understanding, dazzlingly straight and simple,
sprouts by the spring called Now I Get It.
The thicker the woods, the vaster the vista:
the Valley of Obviously.
If any doubts arise, the wind dispels them instantly.
Echoes stir unsummoned
and eagerly explain all the secrets of the worlds.
On the right a cave where Meaning lies.
On the left the Lake of Deep Conviction.
Truth breaks from the bottom and bobs to the surface.
Unshakable Confidence towers over the valley.
Its peak offers an excellent view of the Essence of Things.
For all its charms, the island is uninhabited,
and the faint footprints scattered on its beaches
turn without exception to the sea.
As if all you can do here is leave
and plunge, never to return, into the depths.
Into unfathomable life.
Categories: Blogs

AgilePath Podcast Up

Johanna Rothman - Mon, 03/27/2017 - 18:42

I’ve said before that agile is a cultural change, not merely a project management framework or approach. One of the big changes is around transparency and safety.

We need safety to experiment. We need safety to be transparent. Creating that safe environment can be difficult for everyone involved.

John LeDrew has started a new podcast, agilepath.fm. I had the pleasure of chatting with John for the podcast. He wove a story with several other interviewees, and it’s now up: In search of Safety.

I hope you enjoy it.

Categories: Blogs

End to End Kanban for the Whole Organization

Scrum Expert - Mon, 03/27/2017 - 15:47
If shorter release cycles can be considered a success for Agile software development teams, they might become an issue if the other parts of the organization are not ready to handle...

Categories: Communities

7 Sins of Scrum and other Agile Antipatterns

Scrum Expert - Thu, 03/23/2017 - 22:01
This is about agile “anti-patterns”: “something that looks like a good idea, but which backfires badly when applied” (Coplien). The presenter has been around Agile development from before it was...

Categories: Communities

Favro Will Spin-Off from Hansoft

Scrum Expert - Thu, 03/23/2017 - 18:35
Favro, a cloud-based planning and collaboration app for agile businesses, has announced plans to separate from Hansoft, a project management software firm. The separation of the two businesses will...

Categories: Communities

An Introduction to Monoids

About SCRUM - Hamid Shojaee Axosoft - Thu, 03/23/2017 - 17:30

When you think of programming, you might not immediately think of mathematics. In the day-to-day practice of writing software it’s often hard to see much theory behind REST requests and database schema migrations.

There is, however, a rich world of theory that applies to the work we do, from basic data structures to architectural patterns. This is the field of category theory, sometimes described as “the mathematics of mathematics”. Category theory is concerned with structure and the ways of converting between structures.

NOTE: This article is based on an Axosoft Dev Talk I recently gave, also titled Practical Category Theory: Monoids. Watch that video or keep reading!

Hopefully this sounds like the kind of thing you’re familiar with as a programmer, and if it doesn’t, hopefully it just sounds like a good ol’ time. After all, category theory doesn’t care about specifics; it’s abstract, which helps it be applicable to a great many fields.

Software therefore needs to adapt the concepts of category theory to the realities of programs that execute on real computers. You may have heard of some of these concepts, especially if you’ve looked into statically typed functional programming languages such as Haskell, OCaml, F#, or Scala.

Semigroups

In talking about Monoids, we actually need to talk about two structures: the Semigroup and the Monoid. A Monoid is a Semigroup plus a little extra, so let’s start with the Semigroup. The Semigroup is a simple structure that has to do with combining. In the following example, I’ve combined two strings with an operator such as +:

helloWorld = "Hello" + "world"

And in this example, I’ve concatenated two lists:

parts = concat(["Parsley", "Sage", "Rosemary"], ["Thyme"])

Both of these examples have some similar properties. They take two values of a certain shape and combine them into a single value of the same shape. We don’t take two strings and combine them to get a number or an array; it’s quite reasonable to expect the result to remain a string.

Let’s start with just this single property, that there is some “append” operation that takes in two values of some type and gives back a combined result of that same type. In various languages this might look like:

C#
T Append(T first, T second)
Javascript with Flow
append(first : T, second : T) : T
Haskell, Elm, PureScript
append :: t -> t -> t
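As a minimal hedged sketch of that signature (my own illustration, not code from the article), a Semigroup could be written as a TypeScript interface with string and array instances; the names Semigroup, concatStrings, and concatArrays are invented here:

// A minimal Semigroup sketch: a type T plus an associative append operation.
// Interface and instance names are illustrative, not from a standard library.
interface Semigroup<T> {
  append(first: T, second: T): T;
}

// Strings form a Semigroup under concatenation.
const concatStrings: Semigroup<string> = {
  append: (first, second) => first + second,
};

// Arrays form a Semigroup under concatenation as well.
const concatArrays: Semigroup<string[]> = {
  append: (first, second) => [...first, ...second],
};

console.log(concatStrings.append("Hello", " world"));             // "Hello world"
console.log(concatArrays.append(["Parsley", "Sage"], ["Thyme"])); // ["Parsley", "Sage", "Thyme"]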

This may not seem like a particularly novel thing to do; after all, you probably do this sort of operation all the time on various kinds of data. Remember, though, that we’re talking about a concept from category theory, so while the above type signatures are the end of the story in terms of what the compiler can enforce, we are obligated to obey a few more rules.

These rules are “things that must remain true,” and in math we call these laws. You’re already familiar with tons of these laws even if you haven’t heard them referred to as such. One example is with addition. When adding two numbers you can freely swap the order of the arguments, so 1 + 2 is the same as 2 + 1. This is the commutative property, and the law here is that swapping the order doesn’t affect the resulting value.

For Semigroups, we must obey the associative property, whose law says that how you group the combinations does not matter. That is

("hello" + " ") + "world"

is the same as

"hello" + (" " + "world")

As you can see this doesn’t mean we can swap the order, just that we can do our combine operation on any two adjacent elements, and when we’re all done we’ll end up with the same result.
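To make the law concrete, here is a small spot check (again my own sketch, reusing the hypothetical concatStrings instance from the earlier example); regrouping the appends never changes the result:

// Associativity law: append(append(a, b), c) equals append(a, append(b, c)).
// A spot check with sample values, not a proof.
const a = "hello";
const b = " ";
const c = "world";

const leftGrouped = concatStrings.append(concatStrings.append(a, b), c);
const rightGrouped = concatStrings.append(a, concatStrings.append(b, c));

console.log(leftGrouped === rightGrouped); // true: "hello world" either way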

The append operation is whatever you want it to be (as long as it obeys the rules)

Consider CSS classes. They’re kind of like a string, except when you combine them you don’t just shove them together; it’s more like we’re doing a join operation using a comma and space as the separator:

"app" + "button" + "selected"

should turn out to be

"app, button, selected"

CSS classes aren’t the only type where this is true; if you think in terms of Semigroups, there are lots of cases where fundamentally we’re combining two things of the same type, and that type has a unique rule around how to do the combination.
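Following the article’s CSS-class example, here is a hedged sketch of a Semigroup whose append joins class names with the separator described above (the CssClasses alias and cssClasses instance are invented names, building on the earlier Semigroup interface):

// A CSS-class-list Semigroup: combining two class lists joins them with ", "
// rather than plain string concatenation. Illustrative only.
type CssClasses = string;

const cssClasses: Semigroup<CssClasses> = {
  append: (first, second) => {
    if (first === "") return second; // avoid a dangling separator
    if (second === "") return first;
    return first + ", " + second;
  },
};

const combined = cssClasses.append(cssClasses.append("app", "button"), "selected");
console.log(combined); // "app, button, selected"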

That’s actually all there is to Semigroup, an append operation that is associative. If you’re feeling underwhelmed, don’t feel bad! I was hardly excited when I first encountered the idea. Let’s dig a bit deeper to see how this unassuming structure punches quite a bit above its weight class.

For the purposes of grounding our discussion, let’s pick a few types that satisfy the rules of Semigroup. List is a straightforward data structure that every mainstream language supports (even if it’s named something else), so we’ll talk in terms of Lists moving forward.

Depending on the language you’re using, you might be able to express Semigroup as an interface or protocol, and you might be able to implement that interface/protocol on your language’s List type. If not, don’t fret! These ideas are still useful and applicable even if your language doesn’t give you a way to represent it in your type system (or if you’re using a dynamic language and don’t have a static type system). I’ll use square brackets to represent a list and the ++ operator for my List’s “append” operation.

bandMembers = ["Stevie", "Lindsay"] ++ ["Christine", "John", "Mick"]

The second type we’ll consider is the String. For our discussion, we’ll use the same ++ operator to concatenate Strings.

greeting = "Hello" ++ " " ++ "World"
Interesting attribute #1: Semigroups are partition friendly

Consider the following SQL related code:

databaseQuery = selection ++ table ++ constraints

At the end of the day, we want to have a nice query we can use to get a result from the database. If our top level value is a String that makes up our query, then the associativity of Semigroups means that we have a lot of flexibility in building this up.

One example of this might be in building up the selection. We want to end up with something like:

select * from

or if we want specific columns:

select a, b from

It seems reasonable to build up this String from some fixed parts, the "select" and "from", mixed with some customizable parts, the "*" or "a, b".

selection = "select " ++ "a, b" ++ " from "

The resulting String will be combined with the other elements of the query, but it doesn’t matter when we decide to combine the elements; for instance, we could defer converting the column names to a String by leaving them as a List.

selectionParts = ["select ", ["a", "b"], " from"]

When we’re ready for a selection String, we could combine the column names first, then combine the resulting String with the other two parts. This same flexibility in when to go from a more structured form into a final string is present in other parts of the query as well.

table = "myTableName"
constraintParts = ["where ", constraint1, " AND ", constraint2]
constraint1 = [columnName, " IS ", value]
constraint2 = [columnName2, " NOT ", value2]

As developers, we can decide when it’s best to go from the representation of values in Lists to a final String form—either for a part or for the whole of our query. We can defer this decision through the layers of functions used to build up the query, and we know we can go from parts to whole for any section of the query—independent of any other part, and nothing will change.
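As a hedged sketch of this flexibility (my own example, with placeholder column and constraint values), the query can be assembled from parts in whichever grouping is convenient, and associativity guarantees the same final String:

// Building a query string from parts; the grouping of the appends is up to us.
// Column names and constraint values are hypothetical placeholders.
const joinAll = (parts: string[]): string => parts.reduce((acc, p) => acc + p, "");

const columnNames = ["a", "b"];
const selection = "select " + columnNames.join(", ") + " from ";
const table = "myTableName";
const constraints = joinAll([" where ", "price", " IS ", "10", " AND ", "color", " NOT ", "red"]);

// Grouped as three parts...
const grouped = selection + table + constraints;
// ...or fully inlined in one pass.
const inlined = joinAll(["select ", "a, b", " from ", "myTableName",
  " where ", "price", " IS ", "10", " AND ", "color", " NOT ", "red"]);

console.log(grouped === inlined); // true: associativity makes the grouping irrelevant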

We originally had:

databaseQuery = selection ++ table ++ constraints

Let’s see what the query might look like if we had all of the bits inlined:

databaseQuery = "select " ++ "a, b" ++ " from " ++ "myTableName" ++ " where " ++ columnName ++ " IS " ++ value ++ " AND " ++ columnName2 ++ " NOT " ++ value2

If we were to recreate the order of operations in the original version, it would require us to group our values into 3 parts:

databaseQuery = ("select " ++ "a, b" ++ " from ") ++ ("myTableName") ++ (" where " ++ columnName ++ " IS " ++ value ++ " AND " ++ columnName2 ++ " NOT " ++ value2)

But because of associativity, these are exactly the same. We can group the individual combinations however we want, which is effectively what we would be doing along the way by collapsing the various query parts into selection, table, and constraints.

This might seem obvious (after all, we’re dealing with Strings), but the ability to arbitrarily partition your data and combine it is useful for more exotic types as well. Imagine you have a type Log that represents a single logging event and is a Semigroup. You might have Logs spread across files that are rotated, grouped by date or time range, or split by some other bucketing criteria. Being a Semigroup means you can combine adjacent Logs into an aggregate Log in whatever grouping you want. You could split up the work of combining Logs across multiple threads and know that it is still 100% safe to combine the individual threads’ results into a final result.
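Here is a hedged sketch of what such a Log Semigroup might look like (the Log shape and instance below are invented for illustration, reusing the earlier Semigroup interface):

// An aggregate Log as a Semigroup: appending two Logs merges their counts and
// widens the covered time range. The shape is invented for illustration.
interface Log {
  events: number;
  errors: number;
  from: number; // earliest timestamp (ms)
  to: number;   // latest timestamp (ms)
}

const logSemigroup: Semigroup<Log> = {
  append: (first, second) => ({
    events: first.events + second.events,
    errors: first.errors + second.errors,
    from: Math.min(first.from, second.from),
    to: Math.max(first.to, second.to),
  }),
};

// Because append is associative, each thread (or file) can fold its own chunk
// of adjacent Logs, and the per-chunk results can then be safely combined.
const combineAll = (logs: Log[]): Log =>
  logs.reduce((acc, log) => logSemigroup.append(acc, log));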

Interesting attribute #2: Semigroups are “incremental combination” friendly

Carrying on with the idea of Logs, consider a remote logging service that accepts Logs from your various servers and aggregates them down into something you can do reporting on. If we still have a Log type that is a Semigroup, we have a lot of freedom in how we proceed. For example, when a Log is generated by one of your servers, it could send it immediately, or combine some number of Logs before sending them:

[Diagram: a worker server can send each Log to the remote logging server as soon as it is generated, or it can batch Logs locally (Log ++ Log, repeated until n Logs or a time limit is reached) and send the combined Log.]
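A hedged sketch of the local-batching side (invented names, reusing the logSemigroup instance above): Logs are appended into a pending batch and flushed once a count threshold is hit; a time limit could be added the same way.

// Batch Logs on the worker: append incoming Logs into one pending Log and
// flush when n of them have accumulated. Illustrative only.
class LogBatcher {
  private pending: Log | null = null;
  private count = 0;

  constructor(
    private readonly maxCount: number,
    private readonly send: (batch: Log) => void,
  ) {}

  add(log: Log): void {
    this.pending = this.pending === null ? log : logSemigroup.append(this.pending, log);
    this.count += 1;
    if (this.count >= this.maxCount) this.flush();
  }

  flush(): void {
    if (this.pending !== null) {
      this.send(this.pending);
      this.pending = null;
      this.count = 0;
    }
  }
}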

On the receiving end, you could accept the Logs and combine them with the main log as they arrive, or you could batch them:

[Diagram: on the receiving side, the remote logging server can append each incoming Log to its main Log as it arrives, or batch incoming Logs locally (again until n Logs or a time limit is reached) before appending the batch.]

Monoids

Ok, that’s a lot of things we can do with Semigroups, so where do Monoids fit in? Thankfully, Monoids are, by comparison, very simple. They are everything that a Semigroup is, plus an “empty” value of the type.

For Strings, this would be an empty string; for Lists it would be an empty List; for Logs… well, that’s slightly less clear. If you can define a concept of an empty value for your type though, then congratulations, your type is a Monoid. The type signatures for this sort of thing look like this:

C#
T Empty
Javascript with Flow
empty : T
Haskell, PureScript, Elm
empty :: t
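Continuing the earlier hedged sketch, a Monoid can be expressed as a Semigroup extended with an empty value; the stringMonoid and arrayMonoid instances are again my own illustrations:

// A Monoid is a Semigroup plus an identity ("empty") element.
// Laws: append(empty, x) equals x, and append(x, empty) equals x.
interface Monoid<T> extends Semigroup<T> {
  empty: T;
}

const stringMonoid: Monoid<string> = {
  empty: "",
  append: (first, second) => first + second,
};

const arrayMonoid: Monoid<string[]> = {
  empty: [],
  append: (first, second) => [...first, ...second],
};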

You guessed it! There are laws that go along with Monoids, and this time there are two: the left and right identity. These state that combining the empty element with any value shouldn’t change the meaning of the value. That is, the empty element is an identity value for that type. Looking at the String we see that

"" ++ "some string"

is the same as

"some string"

This works on either the left or right side (hence the left/right identity):

"some string" ++ ""

is the same as

"some string"

This comes in handy when you want to append values but you don’t want to distinguish between having zero and having one already. Consider the case of the remote logging service:

[Diagram: the worker server sends each Log immediately, and the remote logging server appends it to its running Log (Log ++ Log).]

Where did that Log that the Remote Logging Server starts with come from? If Log is a Semigroup then we’d need to generate it somehow. However, if Log is a Monoid, we have our answer: the initial Log value on the Remote Logging Server is the “empty” value of Log. Since combining the empty value with a Log doesn’t change the Log, this is safe to assume, and now we can deal exclusively in terms of Log ++ Log operations. Fewer special cases mean happier developers™.

There are lots more fun use cases for Monoids. One example is taking a List of a Monoidal type and running it through a fold/reduce/Aggregate operation, where the combining function is append and the initial value is empty. You could write a specialized fold that only works for Lists of Monoids but can aggregate them with no other information needed.
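A hedged sketch of that specialized fold (often called mconcat in Haskell; the foldMonoid name here is mine), reusing the Monoid interface and instances above:

// Fold any List of a Monoidal type: the initial value is empty and the
// combining function is append. No other knowledge of T is needed.
const foldMonoid = <T>(monoid: Monoid<T>, values: T[]): T =>
  values.reduce((acc, value) => monoid.append(acc, value), monoid.empty);

console.log(foldMonoid(stringMonoid, ["Hello", " ", "World"])); // "Hello World"
console.log(foldMonoid(arrayMonoid, [["a"], [], ["b", "c"]]));  // ["a", "b", "c"]
console.log(foldMonoid(stringMonoid, []));                      // "" (an empty List is safe)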

Go forth and aggregate!

For something so simple as having an append operation, an empty value, and the associative property, I hope you’ll agree there’s a lot of depth to Semigroups and Monoids. This is just the tip of the categorical iceberg though. Due to its abstract nature, category theory can describe a variety of structures and, best of all, be supported by lots of formal reasoning (aka proofs).

This formal reasoning can give us a lot of confidence that what we’re doing really will work, as long as we’re implementing our types in accordance with the laws of the structure. If you’re looking for next steps, Functor is a mild step up in complexity, but it’s just as rich in terms of applications and reach.

Categories: Companies

Cycle Time and Lead Time

Our organization is starting to talk about measuring Cycle Time and Lead Time on our software engineering stories. It's just an observation, but few people seem to understand these measurement concepts, even though everyone is talking about them. This is a bad omen... I wish I could help illustrate these terms, because I doubt the measurements will be very accurate if the community doesn't understand when to start the clock, and just as important - when to stop it.

[For the nature of the confusion around these terms, compare and contrast these: Agile Alliance Glossary; Six Sigma; KanbanTool.com; Lean Glossary.]

The team I'm working with had a toy basketball goal over their Scrum board... like many cheap toys, the rim broke. Someone bought a superior mini goal: a nice heavy quarter-inch plastic board with a spring-loaded rim - not a cheap toy. The team used "Command Strips" to mount it, but they didn't hold for long.

The team convinced me there was a correlation between their basketball points on the charts and the team's sprint burndown chart. Not cause and effect, but correlation; have you ever stopped to think what that really means? Could it mean that something in the environment beyond your ability to measure is an actual cause of the effect you desire?

I asked the head person at the site for advice: how could we get the goal mounted in our area? He suggested that we didn't need permission, that the walls of the building were not national treasures - we should just mount it... maybe try some Command Strips. Yes, great minds... but what if the fear of getting fired for putting holes in the walls scares one away from doing the right thing? How hard is it to explain to the Texas Workforce Commission when they ask why you were fired?

The leader understood that if I asked the building facilities manager, I might get denied - but if he asked for a favor... it would get done. That very day, Mike had the facilities manager looking at the board and the wall (a 15-20 minute conversation). Are you starting the clock? It's Dec 7th; lead time starts when Mike agreed to the team's request.

The team was excited; it looked like their desire was going to be granted. Productivity would flourish again.

Over the next few days I would see various people looking up at the wall and down at the basketball goal on the floor. There were about 4 of these meetings, each very short and not always with the same people. Team members would come up to me afterwards and ask... "are we still getting the goal?"... "when are they going to bring a drill?"... "what's taking so long?"

Running the calendar forward a bit... Today the facilities guy showed up with a ladder and drill.  It took about 20 minutes.  Basketball goal mounted (Dec 13th) - which clock did you stop?  All of the clocks stop when the customer (team) has their product (basketball goal) in production (a game commences).

I choose to think of lead time as the time it takes an agreed upon product or service order to be delivered.  In this example that starts when Mike, the dude, agreed to help the team get their goal mounted.

In this situation I want to think of cycle time as the time that people worked to produce the product (mounted goal) - others might call this process time (see Lean Glossary). And so I estimated the time that each meeting on the court looking at the unmounted goal took, plus the actual time to mount the goal (100 minutes in total). Technically cycle time is per unit of product - but since in the software world we typically measure per story, and each story is somewhat unique, it's not uncommon to drop the per-unit aspect of cycle time.

Lead time:  Dec 13th minus Dec 7th = 5 work days
Cycle time: hash marks //// (4), one for each meeting at the board to discuss mounting techniques (assume 20 minutes each), plus about 20 minutes with ladder and drill; total 100 minutes

Lead Time 5 days; Cycle Time 100 minutes
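As a hedged illustration of the bookkeeping (a TypeScript sketch of my own, using the dates and touch-time estimates from this story; December 2016 is assumed from the post's timeline), lead time spans the calendar from agreement to delivery, while cycle time sums only the time actually worked:

// Lead time: calendar span from "order agreed" to "delivered to the customer".
// Cycle time (as used in this post): the touch time spent producing the product.
const workDaysInclusive = (start: Date, end: Date): number => {
  let days = 0;
  const d = new Date(start);
  while (d <= end) {
    const day = d.getDay();
    if (day !== 0 && day !== 6) days += 1; // skip Sundays (0) and Saturdays (6)
    d.setDate(d.getDate() + 1);
  }
  return days;
};

const agreed = new Date(2016, 11, 7);     // Dec 7: Mike agrees to the team's request
const delivered = new Date(2016, 11, 13); // Dec 13: goal mounted, first game played

const touchTimeMinutes = [20, 20, 20, 20, 20]; // four planning huddles + the drilling
const cycleTime = touchTimeMinutes.reduce((sum, minutes) => sum + minutes, 0);

console.log(workDaysInclusive(agreed, delivered), "work days of lead time"); // 5
console.log(cycleTime, "minutes of cycle time");                             // 100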
This led to a conversation on the court - under the new goal, with a few team members - about what we could do with these measurements. If one's job was to go around and install basketball goals for every team in the building, a cycle time of 100 minutes with a lead time of 5 days might make the customers a bit unhappy. Yet for a one-off, unusual, once-a-year sort of request, that ratio of 100 minutes to 5 days was not such a bad response time. The customers were very happy in the end, although waiting for 5 days did make them a bit edgy.

But now what would happen if we measured our software development cycle time and lead time - would our (business) customers be happy?  Do we produce a once in a year product? (Well yes - we've yet to do a release.) Do our lead times have similar ratios to cycle time, with very little value add time (process time)?

Pondering...

Well it's January 5th and this example came up in a Scrum Master's Forum meeting.  After telling the tale we still did not agree on when to start and stop the two watches for Lead Time and Cycle Time.  Maybe this is much harder than I thought.  Turns out I'm in the minority of opinions - I'm doing it wrong!

Could you help me figure out why my view point is wrong?  Comment below, please.

LeanKit just published an article on this topic - it's very good but might also misinterpret cycle time.  I see no 'per unit' in their definition of cycle time.  The Lead Time and Cycle Time Debate: When Does the Clock Start? by Tommy Norman.

An Experiment in measuring the team's cycle time:
After a bit of time reflecting, debating, and arguing with colleagues and other agilists online, I've decided to publish a little experiment in measuring cycle-time on a scrum team. Here's the data... what does it say? How do you think the team should react? What action should be next? What should the team's leadership feel/think/do?

The Story: This team has been working together for a while. The sprints are numbered from the start of the year... an interesting practice. This team uses 2-week sprints and is practicing Scrum. They took a nice holiday and required some priming to get back in the swing of things after the first of the year (you see this in the trend of stories completed each sprint). Cycle Time for a story is, on trend, longer than the sprint; this correlates with typical story "carry-over" (a story started is not finished in one sprint and is carried over to the next sprint). Generally a story is finished in the sprint but not in sequence or priority - they all take at least the full sprint to get to done. There is no correlation of story size to cycle time.

Now those are the facts, more or less -- let us see what insights we might create from this cycle time info. With no correlation of story size to cycle time AND little consistency in the number of stories finished in a sprint (trend of # of stories: 1, 6, 7, 2, 2), the question arises - what is the controlling variable, not being measured, that affects the time it takes to get from start to finish with a story? Now that the team can see that the simplest things we could track do not have a strong effect on the length of time (or the throughput) a story requires... and that means the process is not under good control - we can start to look around for some of the uncontrolled (invisible) factors -- if we are courageous enough!

We reflected that many of the stories that carry over and are virtually unpredictable in size/time/effort appear to have large delays, or multiple delays, within their implementation phase. So we devised a quick and dirty way to track this delay. The assumption is that this delay inherent in the work is perhaps the unmeasured / uncontrolled variable that throws the correlation of story size with cycle-time out of kilter.

Our devised technique for tracking delay per story - a yellow dot on the task with a tick mark for every day the task is stuck in-process (delayed).





See Also:

LeanKit published this excellent explanation of their choices in calculating cycle time within their tool:  Kanban Calculations: How to Calculate Cycle Time by Daniel Vacanti.
LeanKit Lead Time Metrics: Why Weekends Matter
Elon Musk turns a tweet into reality in 6 days by Loic Le Meur
The ROI of Multiple Small Releases
https://en.wikipedia.org/wiki/Frank_Bunker_Gilbreth_Sr.
https://en.wikipedia.org/wiki/Cheaper_by_the_Dozen


The Hummingbird Effect: How Galileo Invented Timekeeping and Forever Changed Modern Life
by Maria Popova.  How the invisible hand of the clock powered the Industrial Revolution and sparked the Information Age.

Categories: Blogs

ProjectManagementParadise Podcast Up

Johanna Rothman - Wed, 03/22/2017 - 17:51

Johnny Beirne over at projectmanagementparadise podcast interviewed me, specifically about my post, Visualize Your Work So You Can Say No. See Episode 38: “How to visualise your work so that you can say no” with Johanna Rothman.

Hope you enjoy it!

Categories: Blogs

April Fools Ideas for Techies

About SCRUM - Hamid Shojaee Axosoft - Wed, 03/22/2017 - 17:28

I am not much of an April Fool’s Day fan. Not everyone can achieve the giddy heights of, say, the BBC’s oldie but goodie prank on its viewers 60 years ago, and attempts to do so can be sadly ineffective. Nonetheless, April 1st is almost upon us, and as we welcome Spring back into our lives, we must also endure the one day that our workplace prankster (we’ll call him Tad) comes into his or her own.

It doesn’t have to be this way. You can strike at Tad first, as long as you have a few ideas up your sleeve. Here are some classic and alternative ways to make sure that Tad—and his silly unmatching socks and zany ties—don’t mess with you in the future.

WARNING: The following pranks can result in being:

  1. Tame to the point of failure: Prank is so tame it may genuinely go unnoticed, forever, with no consequences.
  2. Pretty underwhelming: Tad can easily shake it off.
  3. Mildly-to-actually annoying: This is the level you’re shooting for. These pranks hopefully disrupt Tad’s flow for long enough to give him a taste of his own medicine.
  4. Instant dismissal: Tad’s a real bore, but that doesn’t make it a good idea to pants him in the cafeteria or toss his laptop over the side of the balcony.
  5. Arrest: This includes but is not limited to: murder (murder is not a prank), violence, aggressive nudity, arson.

Ok, let’s get this out of the way so we can carry on enjoying our lives!

NOTE: A couple of these pranks are aimed at MacOS users, although they’re likely achievable on other platforms. They also require access to Tad’s work machine. To gain the element of surprise, consider a diversion tactic of your choice, or implement one or more of these pranks early (or even better, in August).

1. The low hanging fruit: Chrome Extensions

A good prank needn’t take a week to plan, and for the half-hearted pranksters out there, most of the heavy lifting has been done for you through the gift of Chrome extensions. It turns out there are a lot of good ones out there, but here are a few to get your creative juices flowing:

  • April First Prank Toolkit: You’ll be shocked to hear that April First Prank Toolkit is a toolkit specifically aimed at pranking people on April first. It has a bunch of options for easy-yet-effective prankage; some pranks are more subtle than others (and therefore, IMO, more effective as slow-burner pranks). These include hiding the cursor and randomly reloading tabs–actions that make it less immediately obvious that your victim has fallen foul of a dastardly prank!
  • Prank ‘Em: Another toolkit; this one allows for subtle irritations such as a flash on the screen. Every setting has a frequency slider too, so you can make things happen intermittently for better covert prankular tactics.
  • Cenafy: This does one thing and one thing only (1/100 of the time): Gives you John Cena. It’s infrequent enough to be another winning slow-burner.
2. Keyboard typing replacements (MacOS only)

This one is short and sweet. Go to System Preferences > Keyboard and click the Text tab. Click the + symbol. From there you can replace ‘the’ with ‘teh’, for example.

NOTE: This replacement doesn’t work on every application, so you may have to be patient to see the fruits of your labors.

3. Application Overload (MacOS only)

Let’s get a bit more serious. In this prank, we’re going to create an AppleScript application that launches applications in a way that Tad isn’t expecting. When he launches Word, he expects Word to launch just once! Classic Tad. Let’s say you want to launch 20 instances of Word instead. Here goes:

  1. Open Script Editor (found in Applications > Utilities)
  2. Select New Document
  3. Add the following script:
    on run
        set wordPath to "/Applications/Microsoft Office 2011/Microsoft Word.app"
        do shell script "open -n " & quoted form of wordPath
        tell application "Microsoft Word" to activate
    end run
  4. What does that do? Well, it assigns the path of Word to a variable. We can then run a shell script to open that path and activate the app. Now, lines 3 and 4 are crucial, because we’re going to call them again—19 more times (or however many you want to do):
    on run
        set wordPath to "/Applications/Microsoft Office 2011/Microsoft Word.app"
        do shell script "open -n " & quoted form of wordPath
        tell application "Microsoft Word" to activate
        do shell script "open -n " & quoted form of wordPath
        tell application "Microsoft Word" to activate
        do shell script "open -n " & quoted form of wordPath
        tell application "Microsoft Word" to activate
        do shell script "open -n " & quoted form of wordPath
        tell application "Microsoft Word" to activate
        do shell script "open -n " & quoted form of wordPath
        tell application "Microsoft Word" to activate
        do shell script "open -n " & quoted form of wordPath
        tell application "Microsoft Word" to activate
        do shell script "open -n " & quoted form of wordPath
        tell application "Microsoft Word" to activate
        do shell script "open -n " & quoted form of wordPath
        tell application "Microsoft Word" to activate
        do shell script "open -n " & quoted form of wordPath
        tell application "Microsoft Word" to activate
        do shell script "open -n " & quoted form of wordPath
        tell application "Microsoft Word" to activate
        do shell script "open -n " & quoted form of wordPath
        tell application "Microsoft Word" to activate
        do shell script "open -n " & quoted form of wordPath
        tell application "Microsoft Word" to activate
        do shell script "open -n " & quoted form of wordPath
        tell application "Microsoft Word" to activate
        do shell script "open -n " & quoted form of wordPath
        tell application "Microsoft Word" to activate
        do shell script "open -n " & quoted form of wordPath
        tell application "Microsoft Word" to activate
        do shell script "open -n " & quoted form of wordPath
        tell application "Microsoft Word" to activate
        do shell script "open -n " & quoted form of wordPath
        tell application "Microsoft Word" to activate
        do shell script "open -n " & quoted form of wordPath
        tell application "Microsoft Word" to activate
        do shell script "open -n " & quoted form of wordPath
        tell application "Microsoft Word" to activate
        do shell script "open -n " & quoted form of wordPath
        tell application "Microsoft Word" to activate
    end run
  5. Go to File > Save
  6. Choose File Format: Application and call it “Microsoft Word”. Save it in a discreet location
  7. Hit Save
  8. You now have an app that will run Word 20 times when opened. But it still looks like a script!
  9. Find the REAL MS Word, right-click and select Get Info
  10. Click the application icon in the top-right and hit command + c
  11. Find your script, get info and click the script’s icon
  12. Hit command + v
  13. You now have an app that looks just like Word. Replace the actual Word icon in the dock and you’re set!

This can be adapted to suit your needs. Maybe only have it launch Word twice—that’s enough to be a mild yet persistent annoyance. Or, have it open a nice selection of other applications.

4. Grunt Task Tomfoolery

This one’s a tad (ha!) obscure, but might be fun. WARNING: It could also be a fireable offense if your prank somehow makes it to production. You have been warned!

Let’s say Tad is working on a bunch of projects (typical Tad), and he uses Grunt to automate his tasks and build his websites. Open up one of his projects in the terminal and type:

npm install grunt-string-replace

Now open up Tad’s Gruntfile and add the task:

grunt.loadNpmTasks('grunt-string-replace');

In his build tasks, add 'string-replace' to the array of tasks.

Next, define that task! In the following example we’re focusing on the build folder, and it’s last in our list of tasks (this ensures that our hard work isn’t overwritten by subsequent tasks). We’re searching for all html files and replacing ‘the’ with ‘teh’ using a regular expression (defining a string as a pattern will only change the first instance).

'string-replace': {
    dist: {
        files: [{
            expand: true,
            cwd: 'build/',
            src: '**/*.html',
            dest: 'build/'
        }], 
        options: {
            replacements: [{
                pattern: /the /g,
                replacement: 'teh '
            }]
        }
    }
}

That’s it. Sit back and wait for Tad to run a build task. Of course, you could get more inventive with the replacements and perhaps only target one or two files. However, most importantly, remember that I never, ever, recommended you do this to anyone. Ever. Don’t do it.

Categories: Companies

Doors Now Open to the Better User Stories Advanced Video Training

This blog post refers to a four-part series of videos on overcoming challenges with user stories. Topics covered are conducting story-writing workshops with story maps, splitting stories, and achieving the right level of detail in user stories.

To be notified when the videos are again available, sign up below:

Notify Me!

This past week we’ve given away free online training and a number of resources to help you combat some of the most vexing problems agile teams encounter when writing user stories.

Now it’s time to open the doors to the full course: Better User Stories.

“In my 30 years of IT experience, this class has without question provided the most ‘bang for buck’ of any previous training course I have ever attended. If you or your organization are struggling with user stories, then this class is absolutely a must have. I simply can’t recommend it enough. 5 Stars!!” - Douglas Tooley

If you watched and enjoyed the free videos, you’ll love Better User Stories. It’s much more in-depth, with 9 modules of advanced training, worksheets, lesson transcripts, audio recordings, bonus materials, and quizzes to help cement the learning.

Registration for Better User Stories will only be open for one week

Click here to read more about the course and reserve your seat.

Because of the intense level of interest in this course, we’re expecting a large number of people to sign up. That’s why we’re only opening the doors for one week, so that we have the time and resources to get everyone settled.

If demand is even higher than we expect, we may close the doors early, so if you already know you’re interested, the next step is to:

Choose one of 3 levels of access. Which is right for you?

I know when it comes to training, everyone has different needs, objectives, learning preferences and budgets.

That’s why you can choose from 3 levels of access when you register:

  • Professional - Get the full course with lifetime access to all materials and any future upgrades
  • Expert Access - Acquire the full course and become part of the Better User Stories online community, where you can discuss ideas, share tips and submit questions to live Q+A calls with Mike
  • Work With Mike - Secure all of the above, plus private, 1:1 time with Mike to work through any specific issues or challenges.

Click here to choose the best level for your situation

What people are already saying

We recently finished a beta launch where a number of agilists worked through all 9 modules, providing feedback along the way. This let us tweak, polish and finish the course to make it even more practical and valuable.

Here’s what people had to say:


Thank you for an amazing course. Better User Stories is by far the best course I have had since I started my agile journey back in 2008.

Anne Aaroe

Packed full of humor, stories, and exercises the course is easy to take at one’s own leisure. Mike Cohn has a way of covering complex topics such as splitting user stories with easy to understand acronyms, charts and reinforces these concepts with quizzes and homework that really bring the learning objectives to life. So, whether you’re practicing scrum or just looking to learn more about user stories this course will provide you the roadmap needed to improve at any experience level, at a cost that everyone can appreciate.

Aaron Corcoran

Click here to read a full description of the course, and what you get with each of the 3 levels of access.

Questions about the course?

Let me know in the comments below.

Categories: Blogs

Deprecating private impediments

TargetProcess - Edge of Chaos Blog - Tue, 03/21/2017 - 17:01

Good day everyone!

In our efforts to continuously improve the Targetprocess experience for you, we're analyzing the performance of some core features, such as visualizing your data on dozens of different views or accessing that data through our API. It's a well-known fact in the software engineering industry that every feature comes with a cost. Unfortunately, sometimes the features we build become obsolete or just don't fire off at all. In a perfect world, such features would be free or extremely cheap to maintain and we could simply ignore them. However, the real world is much more cruel, and quite often there is a cost associated with the ongoing support of these features.

Our "private impediments" feature is a good example of this. According to our analysis, its usage is close to none, but it adds a significant performance overhead to our data querying operations, most notably for inbound/outbound relations lookup. Therefore, we'd like to remove private impediments from Targetprocess in our upcoming release.

So, what does this mean for you?

If you don't use impediments at all, then nothing changes for you. If you use impediments but don't use the "Private" flag on them, then once again nothing changes for you. If you have private impediments, they will be deleted from your Targetprocess account, unless you make them public before the new release.

Wait, what? Are you really going to delete my private impediments?!

Well, yeah, but we've thought this through. There are basically 2 options: either delete them, or make them public. We assumed that it would be terrible to make someone's private data publicly visible. Also, given that private impediment usage is quite low and that we continuously make backups of our on-demand instances, we'd be able to restore the data for individual customers if you ask us to.

Hopefully, this all makes sense to you. Don't hesitate to contact our support if you have any questions!

Categories: Companies

Axosoft Dev Talk: React and Redux

About SCRUM - Hamid Shojaee Axosoft - Tue, 03/21/2017 - 16:32

In this talk, GitKraken developer Tyler Wanek discusses React and Redux, and their uses for one-way dataflow. Tyler will be using the following repo as an example:

https://github.com/implausible/React-Redux-SoDa-Demo

Fork and play along!

Part 1

Part 2

Part 3

Categories: Companies

How to Add the Right Amount of Detail to User Stories

This blog post refers to a four-part series of videos on overcoming challenges with user stories. Topics covered are conducting story-writing workshops with story maps, splitting stories, and achieving the right level of detail in user stories.

To be notified when the videos are again available, sign up below:

Notify Me!

Today’s post introduces the third installment in a free series of training videos all about user stories. Available for a limited time only, you can watch all released videos by signing up to the Better User Stories Mini-Course. Already signed up? Check your inbox for a link to the latest video, or continue reading to find out about today’s lesson.

An extremely common problem with user stories is including the right amount of detail.

If you include too much detail in user stories, story writing takes longer than it would otherwise. As with so many activities in the business world, we want to guard against spending more time on something than necessary.

Also, spending time adding too much detail leads to slower development as tasks like design, coding, and testing do not start until the details have been added. This delay also means it takes longer for the team and its product owner to get feedback from users and stakeholders.

But adding too little detail can lead to different but equally frustrating problems. Leave out detail and the team may struggle to fully implement a story during a sprint as they instead spend time seeking answers.

With too little detail, there’s also an increased chance the development team will go astray on a story by filling in the gaps with what they think is needed rather than asking for clarification.

There’s danger on both sides.

But, when you discover how much detail to add to your stories, it’s like Goldilocks finding the perfect bowl of porridge. Not too much, not too little, but just right.

But how do you discover how much is the right amount?

You can learn how in a new, 13-minute free video training I’ve just released. It’s part of the Better User Stories Mini-Course. To watch the free video, simply sign up here and you’ll get instant access.

Remember, if you’ve already signed up to the course you don’t need to sign in again, just go to www.betteruserstories.com and video #3 will already be unlocked for you.

Adding the right amount of detail--not too much, not too little--is one of the best ways to improve how your team works with user stories. I’m confident this new video will help.

Mike

P.S. This video is only going to be available for a very short period. I encourage you to watch it now at www.betteruserstories.com.

Categories: Blogs

Scrum Days Poland, Warsaw, Poland, June 5-6 2017

Scrum Expert - Tue, 03/21/2017 - 10:00
Scrum Days Poland is a two-day conference focused on Scrum and Agile project management approaches. It aims to create an environment where people can meet, build social networks, do business and have...

Categories: Communities

Becoming an Agile Leader, Part 5: Learning to Learn

Johanna Rothman - Mon, 03/20/2017 - 20:38

To summarize: your agile transformation is stuck. You’ve thought about your why, as in Becoming an Agile Leader, Part 1: Define Your Why. You’ve started to measure possibilities. You have an idea of who you might talk with as in Becoming an Agile Leader, Part 2: Who to Approach. You’ve considered who you need as allies and how to enlist them in Becoming an Agile Leader, Part 3: How to Create Allies. In Becoming an Agile Leader, Part 4: Determining Next Steps, you thought about creating win-wins with influence. Now, it’s time to think about how you and the people involved (or not involved!) learn.


As an agile leader, you learn in at least two ways: observing and measuring what happens in the organization. (I have any number of posts about qualitative and quantitative measurement.) Just as importantly, you learn by thinking, discussing with others, and working with others. The people in the organization learn in these ways, too.

The Satir Change Model is a great way of showing what happens when people learn. (Learning is a form of change.) Here’s the quick intro to the Change Model: We start off in Old Status Quo, what we did before. Along comes a Foreign Element, where someone introduced some kind of change into the environment. We have uneven performance until we discover our Transforming Idea. Once we have an idea that works, we can continue with Practice and Integration until we have more even performance in New Status Quo.

In the Influential Agile Leader, you have a chance to think alone with your pre-work, to discuss together, such as when you draw your map in Part 1, and to work together, as in coaching and influence and all the other parts of the day. One of the most important things we do is to debrief all the activities just after you finish them. That way, people have a chance to articulate what they learned and any confusions they still have.

Every person learns in their own way, at their own pace. With interactions, simulations, and some thinking time, people learn in the way they need to learn.

We don’t tell people what to do or how to think. We suggest options we’ve seen work before (in coaching). We might help supply some options for people who don’t know of alternatives. And, the participants work together. Each person’s situation is a little different. That means each person has experiences that enrich the entire room.

Learn to be an agile leader and help your agile transformation progress. Please join us at the next Influential Agile Leader, May 9-10, 2017 in Toronto.

Categories: Blogs

Retromat – Random Scrum Retrospectives Plan Generator

Scrum Expert - Mon, 03/20/2017 - 19:44
Retromat is a free online website that allows you to generate random plans for Agile and Scrum retrospectives. Out of a pool of more than 100 activities, it selects one for each of the five phases (stage...

Categories: Communities

Becoming an Agile Leader, Part 4: Determining Next Steps

Johanna Rothman - Fri, 03/17/2017 - 14:04

To summarize: your agile transformation is stuck. You’ve thought about your why, as in Becoming an Agile Leader, Part 1: Define Your Why. You’ve started to measure possibilities. You have an idea of who you might talk with as in Becoming an Agile Leader, Part 2: Who to Approach. You’ve considered who you need as allies and how to enlist them in Becoming an Agile Leader, Part 3: How to Create Allies.

Now, it’s time to think about what you will do next.

You might be thinking, “I know what to do next. I have a roadmap, I know where we want to be. What are you talking about?”

Influence. I’m talking about thinking about discovering the short-term and longer-term actions that will help your agile transformation succeed with the people who hold the keys to your transformation.

Here’s an example. Patrick (not his real name) wanted to help his organization’s agile transformation. When he came to the Influential Agile Leader, that was his goal: help the transformation. That’s one big goal. By the time we got to the influence section, he realized his goal was too big.

What did he want, right now? He was working with one team who wrote technical stories, had trouble getting to done, didn’t demo or retrospect, and wanted to increase the length of their iteration to four weeks from two weeks. He knew that was probably going in the wrong direction. (There are times when it’s okay to increase the length of the iteration. This team had so much change and push for more delivery, increasing the time was not a good option.)

He thought he had problems in the management. He did, but those weren’t the problems in the team. When he reviewed his why and his map, as in Part 1, he realized that the organization needed an agile approach for frequent delivery of customer value. If this team (and several others) could release value on a regular basis, the pressure from the customers and management would lessen. He could work with the managers on the project portfolio and other management problems. But, he was sure that the way to make this happen was to help this team deliver frequently.

He realized he had two influential people to work with: the architect and the QA lead. Both of those people looked as if they were “resisting.” In reality, the architect wanted the developers to refactor to patterns to keep the code base clean. The QA lead thought they needed plans before creating tests and was looking for the “perfect” test automation tool.

He decided that his specific goal was to “Help this team deliver value at least as often as every two weeks. Sustain that delivery over six months.” That goal—a subset of “go agile”—allowed him to work first with the architect and then with the QA lead and then both (yes, he practiced all three conversations in the workshop) to achieve his small goal.

Patrick practiced exploring the short-term and long-term deliverables in conversations in the workshop. While the conversations didn’t go precisely the same way back at work, he had enough practice to move between influence and coaching to see what he could do with the people in his context.

It took the team three more iterations to start delivering small stories, but they did. He spent time enlisting the architect in working in the team with the team members to deliver small stories that kept the code base clean. He asked the architect for help in how to work with the QA lead. The architect showed the lead how to start automation and refactor so the testers could test even before the developers had completed the code.

It took that team three more months to be able to regularly deliver value every week, without waiting for the end of the iteration.

Patrick’s original roadmap was great. And, once he started working with teams and management, he needed to adjust the deliverables he and the other coaches had originally planned. The influence conversations allowed him to see the other people’s concerns, and consider what small deliverables all along the way would help this team succeed.

Some of what he learned with this team helped the other teams. And, the other teams had different problems. He used different coaching and influence conversations with different people.

If you want to experience learning how to influence and who, in the context of helping your agile transformation continue, join us at the next Influential Agile Leader, May 9-10, 2017 in Toronto.

My next post is about how our participants learn.

Categories: Blogs

Works on my Machine

Leading Agile - Fri, 03/17/2017 - 13:00

One of the most insidious obstacles to continuous delivery (and to continuous flow in software delivery generally) is the works-on-my-machine phenomenon. Anyone who has worked on a software development team or an infrastructure support team has experienced it. Anyone who works with such teams has heard the phrase spoken during (attempted) demos. The issue is so common there’s even a badge for it:

Perhaps you have earned this badge yourself. I have several. You should see my trophy room.

There’s a longstanding tradition on Agile teams that may have originated at ThoughtWorks around the turn of the century. It goes like this: When someone violates the ancient engineering principle, “Don’t do anything stupid on purpose,” they have to pay a penalty. The penalty might be to drop a dollar into the team snack jar, or something much worse (for an introverted technical type), like standing in front of the team and singing a song. To explain a failed demo with a glib “Works on my machine!” qualifies.

It may not be possible to avoid the problem in all situations. As Forrest Gump said…well, you know what he said. But we can minimize the problem by paying attention to a few obvious things. (Yes, I understand “obvious” is a word to be used advisedly.)

Pitfall 1: Leftover configuration

Problem: Leftover configuration from previous work enables the code to work on the development environment (and maybe the test environment, too) while it fails on other environments.

Pitfall 2: Development/test configuration differs from production

The solutions to this pitfall are so similar to those for Pitfall 1 that I’m going to group the two.

Solution (tl;dr): Don’t reuse environments.

Common situation: Many developers set up an environment they like on their laptop/desktop or on the team’s shared development environment. The environment grows from project to project, as more libraries are added and more configuration options are set. Sometimes the configurations conflict with one another, and teams/individuals often make manual configuration adjustments depending on which project is active at the moment. It doesn’t take long for the development configuration to become very different from the configuration of the target production environment. Libraries that are present on the development system may not exist on the production system. You may run your local tests assuming you’ve configured things the same as production only to discover later that you’ve been using a different version of a key library than the one in production. Subtle and unpredictable differences in behavior occur across development, test, and production environments. The situation creates challenges not only during development, but also during production support work when we’re trying to reproduce reported behavior.

Solution (long): Create an isolated, dedicated development environment for each project

There’s more than one practical approach. You can probably think of several. Here are a few possibilities:

  • Provision a new VM (locally, on your machine) for each project. (I had to add “locally, on your machine” because I’ve learned that in many larger organizations, developers must jump through bureaucratic hoops to get access to a VM, and VMs are managed solely by a separate functional silo. Go figure.)
  • Do your development in an isolated environment (including testing in the lower levels of the test automation pyramid), like Docker or similar.
  • Do your development on a cloud-based development environment that is provisioned by the cloud provider when you define a new project.
  • Set up your continuous integration (CI) pipeline to provision a fresh VM for each build/test run, to ensure nothing will be left over from the last build that might pollute the results of the current build.
  • Set up your continuous delivery (CD) pipeline to provision a fresh execution environment for higher-level testing and for production, rather than promoting code and configuration files into an existing environment (for the same reason). Note that this approach also gives you the advantage of linting, style-checking, and validating the provisioning scripts in the normal course of a build/deploy cycle. Convenient.

Not all of those options will be feasible for every conceivable platform or stack. Pick and choose, and roll your own as appropriate. In general, all these things are pretty easy to do if you’re working on Linux. All of them can be done for other *nix systems with some effort. Most of them are reasonably easy to do with Windows; the only issue there is licensing, and if your company has an enterprise license, you’re all set. For other platforms, such as IBM zOS or HP NonStop, expect to do some hand-rolling of tools.

Anything that’s feasible in your situation and that helps you isolate your development and test environments will be helpful. If you can’t do all these things in your situation, don’t worry about it. Just do what you can do.

Provision a new VM locally

If you’re working on a desktop, laptop, or shared development server running Linux, FreeBSD, Solaris, Windows, or OSX, then you’re in good shape. You can use virtualization software such as VirtualBox or VMware to stand up and tear down local VMs at will. For the less-mainstream platforms, you may have to build the virtualization tool from source.

One thing I usually recommend is that developers cultivate an attitude of laziness in themselves. Well, the right kind of laziness, that is. You shouldn’t feel perfectly happy provisioning a server manually more than once. Take the time during that first provisioning exercise to script the things you discover along the way. Then you won’t have to remember them and repeat the same mis-steps again. (Well, unless you enjoy that sort of thing, of course.)

For example, here are a few provisioning scripts that I’ve come up with when I needed to set up development environments. These are all based on Ubuntu Linux and written in Bash. I don’t know if they’ll help you, but they work on my machine.

If your company is running RedHat Linux in production, you’ll probably want to adjust these scripts to run on CentOS or Fedora, so that your development environments will be reasonably close to the target environments. No big deal.
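
For illustration, here is a minimal sketch of the kind of script the author is describing. It assumes Ubuntu and apt, and the package list is purely illustrative (it is not one of the author’s actual scripts); substitute whatever your project really needs.

    #!/usr/bin/env bash
    # Minimal development-environment provisioning sketch (assumes Ubuntu/apt).
    # The package list below is illustrative; adjust for your actual stack.
    set -euo pipefail
    export DEBIAN_FRONTEND=noninteractive

    sudo apt-get update
    sudo apt-get install -y build-essential git openjdk-8-jdk maven

    # Record exactly what was installed, so the environment is reproducible.
    dpkg-query -W -f='${Package} ${Version}\n' > provisioned-packages.txt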

If you want to be even lazier, you can use a tool like Vagrant to simplify the configuration definitions for your VMs.
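
If you go the Vagrant route, the project can carry its own VM definition, and the day-to-day workflow is just a few commands. A sketch, assuming an Ubuntu base box; the box name and provisioning script path are illustrative:

    # One-time setup: keep the Vagrantfile in the project repository.
    cat > Vagrantfile <<'EOF'
    Vagrant.configure("2") do |config|
      config.vm.box = "ubuntu/xenial64"                     # illustrative box
      config.vm.provision "shell", path: "provision.sh"     # your script
    end
    EOF

    vagrant up        # create and provision the VM
    vagrant ssh       # work inside it
    vagrant destroy   # throw it away when the project is done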

One more thing: Whatever scripts you write and whatever definition files you write for provisioning tools, keep them under version control along with each project. Make sure whatever is in version control for a given project is everything necessary to work on that project…code, tests, documentation, scripts…everything. This is rather important, I think.

Do your development in a container

One way of isolating your development environment is to run it in a container. Most of the tools you’ll read about when you search for information about containers are really orchestration tools intended to help us manage multiple containers, typically in a production environment. For local development purposes, you really don’t need that much functionality. There are a couple of practical containers for this purpose:


These are Linux-based. Whether it’s practical for you to containerize your development environment depends on what technologies you need. To containerize a development environment for another OS, such as Windows, may not be worth the effort over just running a full-blown VM. For other platforms, it’s probably impossible to containerize a development environment.
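
As a hedged illustration of the Linux case, Docker (mentioned earlier) lets you run builds and low-level tests in a throwaway container whose image mirrors production. The image tag and the two project scripts named here are assumptions, not part of the original article:

    # Run the project's build and tests inside a disposable container.
    # ubuntu:16.04, provision.sh, and run-tests.sh are illustrative names.
    docker run --rm -it \
        -v "$PWD":/workspace \
        -w /workspace \
        ubuntu:16.04 \
        bash -c "./provision.sh && ./run-tests.sh"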

Develop in the cloud

This is a relatively new option, and it’s feasible for a limited set of technologies. The advantage over building a local development environment is that you can stand up a fresh environment for each project, guaranteeing you won’t have any components or configuration settings left over from previous work. Here are a couple of options:


Expect to see these environments improve, and expect to see more players in this market. Check which technologies and languages are supported to see whether one of these will be a fit for your needs. Because of the rapid pace of change, there’s no sense in listing what’s available as of the date of this article.

Generate test environments on the fly as part of your CI build

Once you have a script that spins up a VM or configures a container, it’s easy to add it to your CI build. The advantage is that your tests will run on a pristine environment, with no chance of false positives due to leftover configuration from previous versions of the application or from other applications that had previously shared the same static test environment, or because of test data modified in a previous test run.

Many people have scripts that they’ve hacked up to simplify their lives, but they may not be suitable for unattended execution. Your scripts (or the tools you use to interpret declarative configuration specifications) have to be able to run without issuing any prompts (such as prompting for an administrator password). They also need to be idempotent (that is, it won’t do any harm to run them multiple times, in case of restarts). Any runtime values that must be provided to the script have to be obtainable by the script as it runs, and not require any manual “tweaking” prior to each run.
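
As an example of what “suitable for unattended execution” means in practice, here is a sketch of an idempotent, prompt-free provisioning step (the package, directory, and variable names are illustrative):

    #!/usr/bin/env bash
    # Safe for unattended, repeated execution: no prompts, no harm if re-run.
    set -euo pipefail
    export DEBIAN_FRONTEND=noninteractive     # never prompt during installs

    # Install only if not already present (idempotent).
    if ! command -v java >/dev/null 2>&1; then
        sudo apt-get update
        sudo apt-get install -y openjdk-8-jdk
    fi

    # Creating directories is harmless if they already exist.
    mkdir -p /opt/myapp/config

    # Runtime values come from the environment, never from interactive prompts.
    : "${DB_URL:?DB_URL must be provided by the pipeline}"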

The idea of “generating an environment” may sound infeasible for some stacks. Take the suggestion broadly. For a Linux environment, it’s pretty common to create a VM whenever you need one. For other environments, you may not be able to do exactly that, but there may be some steps you can take based on the general notion of creating an environment on the fly.

For example, a team working on a CICS application on an IBM mainframe can define and start a CICS environment any time by running it as a standard job. In the early 1980s, we used to do that routinely. As the 1980s dragged on (and continued through the 1990s and 2000s, in some organizations), the world of corporate IT became increasingly bureaucratized until this capability was taken out of developers’ hands.

Strangely, as of 2017 very few development teams have the option to run their own CICS environments for experimentation, development, and initial testing. I say “strangely” because so many other aspects of our working lives have improved dramatically, while that aspect seems to have moved in retrograde. We don’t have such problems working on the front end of our applications, but when we move to the back end we fall through a sort of time warp.

From a purely technical point of view, there’s nothing to stop a development team from doing this. It qualifies as “generating an environment,” in my view. You can’t run a CICS system “in the cloud” or “on a VM” (at least, not as of 2017), but you can apply “cloud thinking” to the challenge of managing your resources.

Similarly, you can apply “cloud thinking” to other resources in your environment, as well. Use your imagination and creativity. Isn’t that why you chose this field of work, after all?

Generate production environments on the fly as part of your CD pipeline

This suggestion is pretty much the same as the previous one, except that it occurs later in the CI/CD pipeline. Once you have some form of automated deployment in place, you can extend that process to include automatically spinning up VMs or automatically reloading and provisioning hardware servers as part of the deployment process. At that point, “deployment” really means creating and provisioning the target environment, as opposed to moving code into an existing environment.

This approach solves a number of problems beyond simple configuration differences. For instance, if a hacker has introduced anything to the production environment, rebuilding that environment out of source that you control eliminates that malware. People are discovering there’s value in rebuilding production machines and VMs frequently even if there are no changes to “deploy,” for that reason as well as to avoid “configuration drift” that occurs when we apply changes over time to a long-running instance.
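
A minimal sketch of “deployment means rebuilding the environment,” using a container image as the unit of deployment; the image name, container name, and BUILD_NUMBER variable are illustrative, and the same pattern applies to VMs built from your provisioning scripts:

    # Rebuild the runtime environment from version-controlled sources,
    # rather than copying new code into a long-running instance.
    docker build -t myapp:"${BUILD_NUMBER}" .
    docker stop myapp-prod 2>/dev/null || true    # tolerate first-time deploys
    docker rm myapp-prod   2>/dev/null || true
    docker run -d --name myapp-prod myapp:"${BUILD_NUMBER}"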

Many organizations run Windows servers in production, mainly to support third-party packages that require that OS. An issue with deploying to an existing Windows server is that many applications require an installer to be present on the target instance. Generally, information security people frown on having installers available on any production instance. (FWIW, I agree with them.)

If you create a Windows VM or provision a Windows server on the fly from controlled sources, then you don’t need the installer once the provisioning is complete. You won’t re-install an application; if a change is necessary, you’ll rebuild the entire instance. You can prepare the environment before it’s accessible in production, and then delete any installers that were used to provision it. So, this approach addresses more than just the works-on-my-machine problem.

When it comes to back end systems like zOS, you won’t be spinning up your own CICS regions and LPARs for production deployment. The “cloud thinking” in that case is to have two identical production environments. Deployment then becomes a matter of switching traffic between the two environments, rather than migrating code. This makes it easier to implement production releases without impacting customers. It also helps alleviate the works-on-my-machine problem, as testing late in the delivery cycle occurs on a real production environment (even if customers aren’t pointed to it yet).

The usual objection to this is the cost (that is, fees paid to IBM) to support twin environments. This objection is usually raised by people who have not fully analyzed the costs of all the delay and rework inherent in doing things the “old way.”
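
Outside the mainframe world, the same twin-environment idea usually goes by the name blue/green deployment, and the traffic switch itself can be very small. A sketch, assuming nginx in front of two identical environments; the file paths and upstream names are hypothetical:

    # Point the live upstream definition at the newly provisioned ("green")
    # environment, then reload nginx to switch traffic without downtime.
    ln -sfn /etc/nginx/upstreams/green.conf /etc/nginx/upstreams/live.conf
    nginx -t && nginx -s reload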

Pitfall 3: Unpleasant surprises when code is merged

Problem: Different teams and individuals handle code check-out and check-in in various ways. Some check out code once and modify it throughout the course of a project, possibly over a period of weeks or months. Others commit small changes frequently, updating their local copy and committing changes many times per day. Most teams fall somewhere between those extremes.

Generally, the longer you keep code checked out and the more changes you make to it, the greater the chances of a collision when you merge. It’s also likely that you will have forgotten exactly why you made every little change, and so will the other people who have modified the same chunks of code. Merges can be a hassle.

During these merge events, all other value-add work stops. Everyone is trying to figure out how to merge the changes. Tempers flare. Everyone can claim, accurately, that the system works on their machine.

Solution: A simple way to avoid this sort of thing is to commit small changes frequently, run the test suite with everyone’s changes in place, and deal with minor collisions quickly before memory fades. It’s substantially less stressful.
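
With git, that habit looks something like the following; the test script name is a placeholder for whatever your project runs:

    git pull --rebase origin master    # pick up everyone else's small changes
    ./run-tests.sh                     # run the suite with all changes in place
    git add -p                         # stage one small, coherent change
    git commit -m "Describe the small change"
    git push origin master             # share it while it's still fresh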

The best part is you don’t need any special tooling to do this. It’s just a question of self-discipline. On the other hand, it only takes one individual who keeps code checked out for a long time to mess everyone else up. Be aware of that, and kindly help your colleagues establish good habits.

Pitfall 4: Integration errors discovered late

Problem: This problem is similar to Pitfall 3, but one level of abstraction higher. Even if a team commits small changes frequently and runs a comprehensive suite of automated tests with every commit, they may experience significant issues integrating their code with other components of the solution, or interacting with other applications in context.

The code may work on my machine, as well as on my team’s integration test environment, but as soon as we take the next step forward, all hell breaks loose.

Solution: There are a couple of solutions to this problem. The first is static code analysis. It’s becoming the norm for a continuous integration pipeline to include static code analysis as part of every build. This occurs before the code is compiled. Static code analysis tools examine the source code as text, looking for patterns that are known to result in integration errors (among other things).

Static code analysis can detect structural problems in the code such as cyclic dependencies and high cyclomatic complexity, as well as other basic problems like dead code and violations of coding standards that tend to increase cruft in a codebase. It’s just the sort of cruft that causes merge hassles, too.

A related suggestion is to take any warning-level errors from static code analysis tools and from compilers as real errors. Accumulating warning-level errors is a great way to end up with mysterious, unexpected behaviors at runtime.
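
For compiled languages, that can be as simple as switching on “warnings are errors” in the build. For example, with gcc or clang (the source file name is a placeholder):

    # Fail the build on any warning, so warnings cannot quietly accumulate.
    gcc -Wall -Wextra -Werror -c module.c -o module.o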

The second solution is to integrate components and run automated integration test suites frequently. Set up the CI pipeline so that when all unit-level checks pass, then integration-level checks are executed automatically. Let failures at that level break the build, just as you do with the unit-level checks.
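
In a script-driven pipeline, that gating can be a simple chain in which any failing step breaks the build; the script names below are placeholders for your actual stages:

    #!/usr/bin/env bash
    set -e                          # any failing step breaks the build
    ./run-static-analysis.sh
    ./run-unit-tests.sh
    ./run-integration-tests.sh      # only reached when lower-level checks pass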

With these two methods, you can detect integration errors as early as possible in the delivery pipeline. The earlier you detect a problem, the easier it is to fix.

Pitfall 5: Deployments are nightmarish all-night marathons

Problem: Circa 2017 it’s still common to find organizations where people have “release parties” whenever they deploy code to production. Release parties are just like all-night frat parties, only without the fun.

The problem is that the first time applications are executed in a production-like environment is when they are executed in the real production environment. Many issues only become visible when the team tries to deploy to production.

Of course, there’s no time or budget allocated for fixing those issues. People working in a rush may get the system up and running somehow, but often at the cost of regressions that pop up later in the form of production support issues.

And it’s all because at each stage of the delivery pipeline, the system “worked on my machine,” whether a developer’s laptop, a shared test environment configured differently from production, or some other unreliable environment.

Solution: The solution is to configure every environment throughout the delivery pipeline as close to production as possible. The following are general guidelines that you may need to modify depending on local circumstances.

If you have a staging environment, rather than twin production environments, it should be configured with all internal interfaces live and external interfaces stubbed, mocked, or virtualized. Even if this is as far as you take the idea, it will probably eliminate the need for release parties. But if you can, it’s good to continue upstream in the pipeline, to reduce unexpected delays in promoting code along.

Test environments between development and staging should be running the same version of the OS and libraries as production. They should be isolated at the appropriate boundary based on the scope of testing to be performed.

At the beginning of the pipeline, develop on the same OS and the same general configuration as production, if possible. You probably won’t have as much memory or as many processors as the production environment, and the development environment won’t have any live interfaces; all dependencies external to the application will be faked.

At a minimum, match the OS and release level to production as closely as you can. For instance, if you’ll be deploying to Windows Server 2016, then use a Windows Server 2016 VM to run your quick CI build and unit test suite. Windows Server 2016 is based on NT 10, so do your development work on Windows 10 because it’s also based on NT 10. Similarly, if the production environment is Windows Server 2008 R2 (based on NT 6.1) then develop on Windows 7 (also based on NT 6.1). You won’t be able to eliminate every single configuration difference, but you will be able to avoid the majority of incompatibilities.

Follow the same rule of thumb for Linux targets and development systems. For instance, if you will deploy to RHEL 7.3 (kernel version 3.10.x), then run unit tests on the same OS if possible. Otherwise, look for (or build) a version of CentOS based on the same kernel version as your production RHEL (check the kernel version; don’t just assume it matches). At a minimum, run unit tests on a Linux distro based on the same kernel version as the target production instance. Do your development on CentOS or a Fedora-based distro to minimize inconsistencies with RHEL.

If you’re using a dynamic infrastructure management approach that includes building OS instances from source, then this problem becomes much easier to control. You can build your development, test, and production environments from the same sources, assuring version consistency throughout the delivery pipeline. But the reality is that very few organizations are managing infrastructure in this way as of 2017. It’s more likely that you’ll configure and provision OS instances based on a published ISO, and then install packages from a private or public repo. You’ll have to pay close attention to versions.
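
Even without full infrastructure-as-code, the pipeline can fail fast when an environment drifts from the expected baseline. A sketch of such a guard; the expected kernel series and the package checked are examples only:

    #!/usr/bin/env bash
    # Fail the build if this environment does not match the expected baseline.
    set -euo pipefail

    EXPECTED_KERNEL="3.10"                       # e.g., matches RHEL 7.x
    ACTUAL_KERNEL="$(uname -r | cut -d. -f1,2)"

    if [ "$ACTUAL_KERNEL" != "$EXPECTED_KERNEL" ]; then
        echo "Kernel $ACTUAL_KERNEL does not match expected $EXPECTED_KERNEL" >&2
        exit 1
    fi

    # Spot-check a key package the same way (package name is illustrative).
    rpm -q openssl || { echo "openssl is not installed" >&2; exit 1; }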

If you’re doing development work on your own laptop or desktop, and you’re using a cross-platform language (Ruby, Python, Java, etc.), you might think it doesn’t matter which OS you use. You might have a nice development stack on Windows or OSX (or whatever) that you’re comfortable with. Even so, it’s a good idea to spin up a local VM running an OS that’s closer to the production environment, just to avoid unexpected surprises.

For embedded development where the development processor is different from the target processor, include a compile step in your low-level TDD cycle with the compiler options set for the target platform. This can expose errors that don’t occur when you compile for the development platform. Sometimes the same version of the same library will exhibit different behaviors when executed on different processors.

Another suggestion for embedded development is to constrain your development environment to have the same memory limits and other resource constraints as the target platform. You can catch certain types of errors early by doing this.
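
A rough sketch of both embedded suggestions combined, assuming a GNU cross-toolchain for an ARM target; the toolchain prefix, compiler flags, source file, and memory limit are all illustrative:

    # 1. Compile with the target's toolchain and options during the TDD cycle,
    #    so target-specific compile errors surface immediately.
    arm-none-eabi-gcc -mcpu=cortex-m4 -Os -c sensor.c -o sensor.o

    # 2. Run the host-side unit tests under a memory cap that approximates the
    #    target's constraints (ulimit -v takes a limit in kilobytes).
    ( ulimit -v 65536; ./run-unit-tests )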

For some of the older back end platforms, it’s possible to do development and unit testing off-platform for convenience. Fairly early in the delivery pipeline, you’ll want to upload your source to an environment on the target platform and build/test there.

For instance, for a C++ application on, say, HP NonStop, it’s convenient to do TDD on whatever local environment you like (assuming that’s feasible for the type of application), using any compiler and a unit testing framework like CppUnit.

Similarly, it’s convenient to do COBOL development and unit testing on a Linux instance using GnuCOBOL; much faster and easier than using OEDIT on-platform for fine-grained TDD.

But in these cases the target execution environment is very different from the development environment. You’ll want to exercise the code on-platform early in the delivery pipeline to eliminate works-on-my-machine surprises.

Summary

The author’s observation is that the works-on-my-machine problem is one of the leading causes of developer stress and lost time. The author further observes that the main cause of the works-on-my-machine problem is differences in configuration across development, test, and production environments.

The basic advice is to avoid configuration differences to the extent possible. Take pains to ensure all environments are as similar to production as is practical. Pay attention to OS kernel versions, library versions, API versions, compiler versions, and the versions of any home-grown utilities and libraries. When differences can’t be avoided, then make note of them and treat them as risks. Wrap them in test cases to provide early warning of any issues.

The second suggestion is to automate as much testing as possible at different levels of abstraction, merge code frequently, build the application frequently, run the automated test suites frequently, deploy frequently, and (where feasible) build the execution environment frequently. This will help you detect problems early, while the most recent changes are still fresh in your mind, and while the issues are still minor.

Let’s fix the world so that the next generation of software developers doesn’t understand the phrase, “Works on my machine.”

The post Works on my Machine appeared first on LeadingAgile.

Categories: Blogs

Five Simple but Powerful Ways to Split User Stories

This blog post refers to a four-part series of videos on overcoming challenges with user stories. Topics covered are conducting story-writing workshops with story maps, splitting stories, and achieving the right level of detail in user stories.

To be notified when the videos are available again, sign up below:

Notify Me!

Today’s post introduces the second installment in a free series of training videos all about user stories. The videos are available for a limited time only; you can watch all of the released videos by signing up for the Better User Stories Mini-Course. Already signed up? Check your inbox for a link to the latest video, or continue reading to find out about today’s lesson.

One of the most common struggles faced by agile teams is the need to split user stories. I'm sure you've struggled with this. I certainly did at first.

In fact, when I first began using Scrum, some of our product backlog items were so big that we occasionally opted for six-week sprints. With a bit more experience, though, that team and I saw enough ways to split work that we could have done one-day sprints if we'd wanted.

But splitting stories was hard at first. Really hard.

But I've got some good news for you. Not only have I figured out how to split stories on my own, I've learned how to explain how to do it so that anyone can quickly become an expert.

What I discovered is that almost every story can be split with one of five techniques. Learn those five simple techniques and you're set.

Even better, the five techniques form an easily memorable acronym: SPIDR.

I've just released a new, 20-minute, free video training that describes each of these techniques as part of the Better User Stories Mini-Course. To watch it simply sign up here and you’ll get instant access.

Remember, if you’ve already signed up to the course you don’t need to sign in again, just check your inbox for an email from me with a link to the latest lesson.

Unless you've already cracked the code on splitting stories, you definitely want to learn the five techniques that make up the SPIDR approach by watching this free video training.

Mike

P.S. This video is only going to be available for a very short period. I encourage you to watch it now at https://www.betteruserstories.com.

Categories: Blogs

How to open views with the currently selected Projects and Teams

TargetProcess - Edge of Chaos Blog - Thu, 03/16/2017 - 10:22

Some time ago, we redesigned the Projects-Teams selector. It became a part of views, and the global selector was removed. This helped to standardize and simplify views, but made it impossible to set Projects and Teams just once and navigate through several views with that selection.

To make this scenario work, in v.3.11.0 we’ve implemented a keyboard shortcut that lets you navigate to a new view with the currently selected Projects and Teams.

As usual, you can select the Projects and Teams you need in the selector at the top of the view (this selection won't override the predefined Projects and Teams for this view).

[Screenshot: selecting Projects and Teams in the selector at the top of the view]

Explore the data. If you need to navigate to another view with the same selected Projects and Teams, simply hold Alt and click on the view you wish to explore.

[Screenshot: holding Alt and clicking the view to open]

As a result, the view will be opened using the Projects and Teams that were selected on the previous view.

[Screenshot: the view opened with the previously selected Projects and Teams]

You can use Alt+Click to navigate through any number of views with the currently selected Projects and Teams.

For more details on how the Projects and Teams selector works, see the guide post.

Categories: Companies

Scrum Knowledge Sharing

SpiraPlan is an agile project management system designed specifically for methodologies such as Scrum, XP, and Kanban.