
Feed aggregator

Book Review: Agile Noir by Lancer Kind

First, allow me to lay out some ground rules and a touch of the backstory...

I'm not a professional book reviewer, nor paid in any way to read.  But if I could get that gig... I'd be a happy camper.  I've never written a book, but I've hacked out some code and a few articles, some of which might be considered book reviews.  I've worked in the Agile industry for more than a decade (but who's counting), and so I may be a little too close to the topic to have proper literary impartiality.  In fact, let me just go ahead and be explicit - I've done this, been there, got the t-shirt; I shit you not - this shit is for real!
Agile Noir by Lancer Kind
Now the ground rules...  I think this review will be written ... what's the word... while I'm reading, at the same time, without much delay in the reading - writing phases....in situ.... iteratively... oh I give up...

So don't be surprised - dear reader - if I just drop off in the middle...
                       ... maybe check back every week until I finish
Mar 22,
I've studied the cover... quite a nice graphic - too bad the whole novel isn't a graphic novel; oh - maybe it would be too bloody.  I could see Agile Noir becoming a Tarantino film.  As I sat looking at my book to-do stack... I skipped a few levels down the stack and pulled out Lancer Kind's 2016 Agile Noir.  I have read some of his previous comics titled Scrum Noir (vol 1, 2, 3).  So maybe I know what to expect - it should be a fun romp in the fast lane with lots of inside-the-industry puns, innuendo, and metaphors.

Well the damn dedication just reeks of an Agile Coach - Servant Leader (puke, barf.... moving on).

The High Cost of Schedule Slip
Now you may not find the situation Kartar finds himself in funny...  allow me to add some overtones of irony....  I'm going to go out on a racist limb and suggest that Kartar is an Indian.  That he is working in the heart of the Indian nation (Los Wages, NV), perhaps on a job for an Italian crime boss.  And none of these circumstances have anything to do with one of the world of science's biggest failures - Columbus's discovery of the New World - which he thought was India, and by naming its inhabitants accordingly he created the confusion we will have to deal with evermore.  Now Columbus was of course searching for a way to reduce the schedule required for shipping spices.

Kartar appears to be very immersed in planning and the art/science/pseudo-truth of planning and predicting the future of projects.  And he may be a master with the Gantt chart (which is footnoted on page 18).

This is all ringing just too true ... and I'm envisioning it in the style of a 1956 black and white film...

Kartar is the metaphor of his project... it seems that it's not quite on schedule... he's late to a just-announced meeting with some superior and is driving at breakneck speed on loose sand in the Vegas outskirts, careening over bumps and ditches with the accelerator pinned to the floor - because some people in a van might be trying to kill him.  Happens ALL - THE - TIME.





See Also:
Scrum Noir - several volumes of graphic novel about scrum masters and the projects they encounter - also by Lancer Kind
I will have a Double Expresso - Amazon review of Scrum Noir.
Categories: Blogs

7 Sins of Scrum and other Agile Antipatterns

Scrum Expert - Thu, 03/23/2017 - 22:01
This is about agile “anti-patterns”: “something that looks like a good idea, but which backfires badly when applied” (Coplien). The presenter has been around Agile development from before it was...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
Categories: Communities

Favro Will Spin-Off from Hansoft

Scrum Expert - Thu, 03/23/2017 - 18:35
Favro, a cloud-based planning and collaboration app for agile businesses, has announced plans to separate from Hansoft, a project management software firm. The separation of the two businesses will...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
Categories: Communities

An Introduction to Monoids

About SCRUM - Hamid Shojaee Axosoft - Thu, 03/23/2017 - 17:30

When you think of programming, you might not immediately think of mathematics. In the day-to-day practice of writing software it’s often hard to see much theory behind REST requests and database schema migrations.

There is, however, a rich world of theory that applies to the work we do, from basic data structures to architectural patterns. This is the field of category theory, sometimes described as “the mathematics of mathematics”. Category theory is concerned with structure and the ways of converting between structures.

NOTE: This article is based on an Axosoft Dev Talk I recently gave, also titled Practical Category Theory: Monoids. Watch that video or keep reading!

Hopefully this sounds like the kind of thing you’re familiar with as a programmer, and if it doesn’t, hopefully it just sounds like a good ol’ time. After all, category theory doesn’t care about specifics; it’s abstract, which helps it be applicable to a great many fields.

Software therefore needs to adapt the concepts of category theory to match the realities of programs that execute on real computers. You may have heard of some of these concepts, especially if you’ve looked into statically typed functional programming languages such as Haskell, OCaml, F#, or Scala.

Semigroups

In talking about Monoids, we actually need to talk about two structures: the Semigroup and the Monoid. A Monoid is a Semigroup with a little extra, so let’s start with the Semigroup. The Semigroup is a simple structure that has to do with combining. In the following example, I’ve combined two strings with an operator such as +:

helloWorld = "Hello" + "world"

And in this example, I’ve concatenated two lists:

parts = concat(["Parsley", "Sage", "Rosemary"], ["Thyme"])

Both these examples have some similar properties. They take two values of a certain shape and combine them into a single value of the same shape. We don’t take two strings and combine them to get a number or an array; it’s quite reasonable to expect the result to remain a string.

Let’s start with just this single property, that there is some “append” operation that takes in two values of some type and gives back a combined result of that same type. In various languages this might look like:

C#
T Append(T first, T second)
Javascript with Flow
append(first : T, second : T) : T
Haskell, Elm, PureScript
append :: t -> t -> t

This may not seem like a particularly novel thing to do; after all, you probably do this sort of operation all the time on various kinds of data. Remember, though, that we’re talking about a concept from category theory, so while the above type signatures are the end of the story in terms of what the compiler can enforce, we are obligated to obey a few more rules.

These rules are “things that must remain true,” and in math we call these laws. You’re already familiar with tons of these laws even if you haven’t heard them referred to as such. One example is with addition. When adding two numbers you can freely swap the order of the arguments, so 1 + 2 is the same as 2 + 1. This is the commutative property, and the law here is that swapping the order doesn’t affect the resulting value.

For Semigroups, we must obey the associative property, whose law is that the way you group your combinations does not matter. That is

(“hello” + “ “) + “world”

is the same as

“hello” + (“ “ + “world”)

As you can see this doesn’t mean we can swap the order, just that we can do our combine operation on any two adjacent elements, and when we’re all done we’ll end up with the same result.

The append operation is whatever you want it to be (as long as it obeys the rules)

Consider CSS classes. They’re kind of like a string, except when you combine them you don’t just shove them together; it’s more like we’re doing a join operation using a comma and space as the separator:

“app” + “button” + “selected”

should turn out to be

“app, button, selected”

CSS classes aren’t the only type where this is true; if you think in terms of Semigroups, there are lots of cases where fundamentally we’re combining two things of the same type, and that type has a unique rule around how to do the combination.

That’s actually all there is to Semigroup, an append operation that is associative. If you’re feeling underwhelmed, don’t feel bad! I was hardly excited when I first encountered the idea. Let’s dig a bit deeper to see how this unassuming structure punches quite a bit above its weight class.

For the purposes of grounding our discussion, let’s pick a few types that satisfy the rules of Semigroup. List is a straightforward data structure that every mainstream language supports (even if it’s named something else), so we’ll talk in terms of Lists moving forward.

Depending on the language you’re using, you might be able to express Semigroup as an interface or protocol, and you might be able to implement that interface/protocol on your language’s List type. If not, don’t fret! These ideas are still useful and applicable even if your language doesn’t give you a way to represent it in your type system (or if you’re using a dynamic language and don’t have a static type system). I’ll use square brackets to represent a list and the ++ operator for my List’s “append” operation.
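For illustration, here is roughly what that interface could look like in TypeScript (my own sketch, with made-up names such as Semigroup&lt;T&gt;; the article itself sticks with the ++ pseudocode that follows):

// A Semigroup is a type T together with an associative append operation.
interface Semigroup<T> {
  append(first: T, second: T): T;
}

// Strings form a Semigroup under concatenation.
const stringSemigroup: Semigroup<string> = {
  append: (first, second) => first + second,
};

// Lists (arrays) form a Semigroup under concatenation as well.
const listSemigroup: Semigroup<string[]> = {
  append: (first, second) => [...first, ...second],
};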

bandMembers = [“Stevie”, “Lindsay”] ++ [“Christine”, “John”, “Mick”]

The second type we’ll consider is the String. For our discussion, we’ll use the same ++ operator to concatenate Strings.

greeting = “Hello” ++ “ “ ++ “World”
Interesting attribute #1: Semigroups are partition friendly

Consider the following SQL related code:

databaseQuery = selection ++ table ++ constraints

At the end of the day, we want to have a nice query we can use to get a result from the database. If our top level value is a String that makes up our query, then the associativity of Semigroups means that we have a lot of flexibility in building this up.

One example of this might be in building up the selection. We want to end up with something like:

select * from

or if we want specific columns:

select a, b from

It seems reasonable to build up this String from some fixed parts, the "select " and " from ", mixed with some customizable parts, the "*" or "a, b".

selection = "select " ++ "a, b" ++ " from "

The resulting String will be combined with the other elements of the query, but it doesn’t matter when we decide to combine the elements; we could defer converting the column names to a String by leaving them in a List.

selectionParts = ["select ", ["a", "b"], " from"]

When we’re ready for a selection String, we could combine the column names first, then combine the resulting String with the other two parts. This same flexibility in when to go from a more structured form into a final string is present in other parts of the query as well.

table = “myTableName”
constraintParts = [“where “, constraint1, “ AND “, constraint2]
constraint1 = [columnName, “ IS “, value]
constraint2 = [columnName2, “ NOT “, value2]

As developers, we can decide when it’s best to go from the representation of values in Lists to a final String form—either for a part or for the whole of our query. We can defer this decision through the layers of functions used to build up the query, and we know we can go from parts to whole for any section of the query—independent of any other part, and nothing will change.

We originally had:

databaseQuery = selection ++ table ++ constraints

Let’s see what the query might look like if we had all of the bits inlined:

databaseQuery = “select “ ++ “a, b” ++ “ from “ ++ “myTableName” ++ “ where “ ++ columnName ++ “ IS “ ++ value ++ “ AND “ ++ columnName2 ++ “ NOT “ ++ value2

If we were to recreate the order of operations in the original version, it would require us to group our values into 3 parts:

databaseQuery = (“select “ ++ “a, b” ++ “ from “) ++ (“myTableName”) ++ (“ where “ ++ columnName ++ “ IS “ ++ value ++ “ AND “ ++ columnName2 ++ “ NOT “ ++ value2)

But because of associativity, these are exactly the same. We can group the individual combinations however we want, which is effectively what we would be doing along the way by collapsing the various query parts into selection, table, and constraints.

This might seem obvious (after all, we’re dealing with Strings), but the ability to arbitrarily partition your data and combine it is useful for more exotic types as well. Imagine you have a type Log that represents a single logging event and is a Semigroup. You might have Logs spread across files that are rotated, grouped by date or time range, or some other bucketing criteria. Being a Semigroup means you can combine adjacent Logs into an aggregate Log in any order you want. You could split up the work of combining Logs across multiple threads and know that it is still 100% safe to combine the individual threads’ results into a final result.
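As a sketch of that partition-friendliness (my own TypeScript illustration; the Log type and its fields are invented for the example), imagine a Log that aggregates event counts by severity:

// A hypothetical aggregate Log: counts of events by severity.
type Log = { errors: number; warnings: number };

// Appending two Logs merges their counts; the operation is associative.
const appendLogs = (first: Log, second: Log): Log => ({
  errors: first.errors + second.errors,
  warnings: first.warnings + second.warnings,
});

// Combine any adjacent chunk of Logs (e.g. one chunk per file or per thread)...
const combineChunk = (chunk: Log[]): Log => chunk.reduce(appendLogs);

// ...then safely combine the chunk results into a final aggregate.
const fileA: Log[] = [{ errors: 1, warnings: 0 }, { errors: 0, warnings: 2 }];
const fileB: Log[] = [{ errors: 3, warnings: 1 }];
const total = appendLogs(combineChunk(fileA), combineChunk(fileB));
// Same result as combining every Log in order, one at a time.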

Interesting attribute #2: Semigroups are “incremental combination” friendly

Carrying on with the idea of Logs, consider a remote logging service that accepts Logs from your various servers and aggregates them down into something you can do reporting on. If we still have a Log type that is a Semigroup, we have a lot of freedom in how we proceed. For example, when a Log is generated by one of your servers, it could send it immediately, or combine some number of Logs before sending them:

[Diagram: in the first flow, the worker server sends each Log to the remote logging server immediately, and the server combines it with what it already has (Log ++ Log); in the second flow, the worker batches locally, combining Logs (Log ++ Log) until n Logs or a time limit is reached, then sends the batched Log.]

On the receiving end, you could accept the Logs and combine them with the main log as they arrive, or you could batch them:

[Diagram: in the first flow, the remote logging server combines each incoming Log with its main Log as it arrives (Log ++ Log); in the second flow, it batches incoming Logs locally, combining them until n Logs or a time limit is reached, before appending the batch to the main Log.]

Monoids

Ok, that’s a lot of things we can do with Semigroups, so where do Monoids fit in? Thankfully, Monoids are, by comparison, very simple. They are everything that a Semigroup is, plus an “empty” value of the type.

For Strings, this would be an empty string; for Lists it would be an empty List; for Logs… well, that’s slightly less clear. If you can define a concept of an empty value for your type, though, then congratulations, your type is a Monoid. The type signatures for this sort of thing look like this:

C#
T Empty
Javascript with Flow
empty : T
Haskell, PureScript, Elm
empty :: t

You guessed it! There are laws that go along with Monoids, and this time there are two: the left and right identity. These state that combining the empty element with any value shouldn’t change the meaning of the value. That is, the empty element is an identity value for that type. Looking at Strings, we see that

“” ++ “some string”

is the same as

“some string”

This works on either the left or right side (hence the left/right identity):

“some string” ++ “”

is the same as

“some string”

This comes in handy when you want to append values but you don’t want to distinguish between having zero and having one already. Consider the case of the remote logging service:

[Diagram: the worker server sends a Log immediately, and the remote logging server combines it with the Log it already holds (Log ++ Log).]

Where did that Log that the Remote Logging Service starts with come from? If Log is only a Semigroup, then we’d need to generate it somehow. However, if Log is a Monoid, we have our answer: the initial Log value on the Remote Logging Server is the “empty” value of Log. Since combining the empty value with a Log doesn’t change the Log, this is safe to assume, and now we can deal exclusively in terms of Log ++ Log operations. Fewer special cases mean happier developers™.

There are lots more fun use cases for Monoids. One example is taking a Monoidal type in a list and running it through a fold/reduce/Aggregate operation, where the fold function is append and the initial value is empty. You could write a specialized fold that only works for Lists of Monoids but can aggregate them with no other information needed.
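As a sketch of that idea (again my own TypeScript illustration, with invented names like Monoid&lt;T&gt; and fold), the empty value is all a generic fold needs:

// A Monoid is a Semigroup plus an identity ("empty") value.
interface Monoid<T> {
  empty: T;
  append(first: T, second: T): T;
}

const stringMonoid: Monoid<string> = {
  empty: "",
  append: (first, second) => first + second,
};

// A fold that works for any Monoid: start from empty, append everything.
function fold<T>(monoid: Monoid<T>, values: T[]): T {
  return values.reduce((acc, value) => monoid.append(acc, value), monoid.empty);
}

const query = fold(stringMonoid, ["select a, b", " from ", "myTableName"]);
// Folding an empty list is no longer a special case; it simply returns monoid.empty.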

Go forth and aggregate!

For something so simple as having an append operation, an empty value, and the associative property, I hope you’ll agree there’s a lot of depth to Semigroups and Monoids. This is just the tip of the categorical iceberg though. Due to its abstract nature, category theory can describe a variety of structures, and best yet, be supported by lots of formal reasoning (aka proofs).

This formal reasoning can give us a lot of confidence that what we’re doing really will work, as long as we’re implementing our types in accordance with the laws of the structure. If you’re looking for next steps, Functor is a mild step up in complexity, but it’s equally rich in terms of applications and reach.

Categories: Companies

Dialogue on Prerequisites for Collaboration

IDEO-University 'From Ideas to Action' Lesson 1.

Join the dialogue on G+ Agile+ group.

Dialogue on Collaboration on Facebook (PDF)


Collaboration starts with who we are and our story - not the technology or the data
"The Future of Work Is Social Collaboration from Inside Out, where people connect around the why of work from who they really are as individuals in community.
They collaborate in generative conversations and co-create what’s next, i.e. their unique Contribution of value to society – what we might call Social Good.
They collaborate by taking the time to appreciate and align each other’s unique, hard wired, natural strengths, creating new levels of authentic and trusting relationships to take the Social Journey." - Jeremy Scrivens, Director at The Emotional Economy at Work




What does dialogue mean... what does it contribute to collaboration?  Here's what the inventor of the internet Al Gore had to say about this:

Audie Cornish speaks with former Vice President Al Gore about the new edition of his book, The Assault On Reason.
Well, others have noted a free press is the immune system of representative democracy. And as I wrote 10 years ago, American democracy is in grave danger from the changes in the environment in which ideas either live and spread or wither and die. I think that the trends that I wrote about 10 years ago have continued and worsened, and the hoped-for remedies that can come from online discourse have been slow to mature. I remain optimistic that ultimately free speech and a free press where individuals have access to the dialogue will have a self-correcting quality. -- Al Gore
Excerpt from NPR interview with Al Gore by Audie Cornish, March 14, 2017. Heard on All Things Considered.

See Also:
Mob Programming by Woody Zuill

[View the story "Dialogue on Prerequisites for Collaboration" on Storify]
Categories: Blogs

Cycle Time and Lead Time

Our organization is starting to talk about measuring Cycle Time and Lead Time on our software engineering stories.  It's just an observation, but few people seem to understand these measurement concepts, yet everyone is talking about them.  This is a bad omen...  I wish I could help illustrate these terms, because I doubt the measurements will be very accurate if the community doesn't understand when to start the clock, and just as important - when to stop it.

[For the nature of the confusion around these terms, compare and contrast these:  Agile Alliance Glossary; Six Sigma; KanbanTool.com; Lean Glossary.]

The team I'm working with had a toy basketball goal over their Scrum board...  like many cheap toys, the rim broke.  Someone bought a superior mini goal - a nice heavy quarter-inch plastic board with a spring-loaded rim - not a cheap toy.  The team used "Command Strips" to mount it, but they didn't hold for long.

The team convinced me there was a correlation between their basketball points on the charts and the team's sprint burndown chart.  Not cause and effect, but correlation; have you ever stopped to think what that really means?  Could it mean that something in the environment beyond your ability to measure is an actual cause of the effect you desire?

I asked the head person at the site for advice: how could we get the goal mounted in our area?  He suggested that we didn't need permission, that the walls of the building were not national treasures - we should just mount it... maybe try some Command Strips.  Yes, great minds...  but doesn't the prospect of getting fired for putting holes in the walls scare one away from doing the right thing?  How hard is it to explain to the Texas Work Force Commission when they ask why you were fired?

The leader understood that if I asked the building facilities manager that I might get denied - but if he asked for a favor... it would get done.  That very day, Mike had the facilities manager looking at the board and the wall (a 15-20 minute conversation).  Are you starting the clock?  It's Dec 7th, lead time starts when Mike agreed to the team's request.

The team was excited; it looked like their desire was going to be granted.  Productivity would flourish again.

Over the next few days I would see various people looking up at the wall and down at the basketball goal on the floor.  There were about 4 of these meetings each very short and not always the same people.  Team members would come up to me afterwards and ask...  "are we still getting the goal?"... "when are they going to bring a drill?"...  "what's taking so long?"

Running the calendar forward a bit... Today the facilities guy showed up with a ladder and drill.  It took about 20 minutes.  Basketball goal mounted (Dec 13th) - which clock did you stop?  All of the clocks stop when the customer (team) has their product (basketball goal) in production (a game commences).

I choose to think of lead time as the time it takes an agreed upon product or service order to be delivered.  In this example that starts when Mike, the dude, agreed to help the team get their goal mounted.

In this situation I want to think of cycle time as the time that people worked to produce the product (the mounted goal) - others might call this process time (see Lean Glossary).  And so I estimated the time that each meeting on the court looking at the unmounted goal took, plus the actual time to mount the goal (100 minutes).  Technically cycle time is per unit of product - since in the software world we typically measure per story and each story is somewhat unique, it's not uncommon to drop the per-unit aspect of cycle time.

Lead time:  Dec 13th minus Dec 7th = 5 work days
Cycle time:  hash marks //// (4)  one for each meeting at the board to discuss mounting techniques (assume 20 m. each); and about 20 minutes with ladder and drill;  total 100 minutes

Lead Time 5 days; Cycle Time 100 minutes
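If it helps, here is the same arithmetic as a tiny TypeScript sketch (mine, not part of the original story; the value-add ratio at the end is my own addition and assumes 8-hour work days):

// Lead time: from the agreement on Dec 7th to the mounted goal on Dec 13th.
const leadTimeWorkDays = 5;

// Cycle (process) time: only the time people actually spent working on it.
const meetingsAtTheWall = 4;        // tick marks, roughly 20 minutes each
const minutesPerMeeting = 20;
const ladderAndDrillMinutes = 20;
const cycleTimeMinutes = meetingsAtTheWall * minutesPerMeeting + ladderAndDrillMinutes; // 100

// Rough value-add ratio: how much of the lead time was actual work?
const leadTimeMinutes = leadTimeWorkDays * 8 * 60;        // 2400
const valueAddRatio = cycleTimeMinutes / leadTimeMinutes; // ~0.04, i.e. about 4%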
This led to a conversation on the court - under the new goal - with a few team members about what we could do with these measurements.  If one's job were to go around and install basketball goals for every team in the building, a cycle time of 100 minutes with a lead time of 5 days might make the customers a bit unhappy.  Yet for a one-off, unusual, once-a-year sort of request, that ratio of 100 minutes to 5 days was not such a bad response time.  The customers were very happy in the end, although waiting for 5 days did make them a bit edgy.

But now what would happen if we measured our software development cycle time and lead time - would our (business) customers be happy?  Do we produce a once-a-year product? (Well yes - we've yet to do a release.) Do our lead times have similar ratios to cycle time, with very little value-add time (process time)?

Pondering...

Well it's January 5th and this example came up in a Scrum Master's Forum meeting.  After telling the tale we still did not agree on when to start and stop the two watches for Lead Time and Cycle Time.  Maybe this is much harder than I thought.  Turns out I'm in the minority of opinions - I'm doing it wrong!

Could you help me figure out why my view point is wrong?  Comment below, please.

LeanKit just published an article on this topic - it's very good but might also misinterpret cycle time.  I see no 'per unit' in their definition of cycle time.  The Lead Time and Cycle Time Debate: When Does the Clock Start? by Tommy Norman.

An Experiment in measuring the team's cycle time:
After a bit of time reflecting, debating, and arguing with colleagues and other agilists online, I've decided to publish a little experiment in measuring cycle time on a Scrum team.  Here's the data... what does it say?  How do you think the team should react?  What action should be next?  What should the team's leadership feel/think/do?

The Story:  This team has been working together for a while.  The sprints are numbered from the start of the year... an interesting practice.  This team uses 2-week sprints and is practicing Scrum.  The team took a nice holiday and required some priming to get back in the swing of things after the first of the year (you see this in the trend of stories completed each sprint).  Cycle Time for a story on trend is longer than the sprint; this correlates with typical story "carry-over" (a story started is not finished in one sprint and is carried over to the next sprint).  Generally a story is finished in the sprint but not in sequence or priority - they all take at least the full sprint to get to done.  There is no correlation of story size to cycle time.

Now those are the facts, more or less -- let us see what insights we might create from this cycle time info.  With no correlation of story size to cycle time AND little consistency in the number of stories finished in a sprint (trend of # of stories: 1, 6, 7, 2, 2), the question arises - what is the controlling variable, not being measured, that affects the time it takes to get from start to finish with a story?  Now that the team can see that the simplest things we could track do not have a strong effect on the length of time (or the throughput) a story requires... and that means the process is not under good control - we can start to look around for some of the uncontrolled (invisible) factors -- if we are courageous enough!

We reflected that many of the stories that carry over, and are virtually unpredictable in size/time/effort, appear to have large delays or multiple delays within their implementation phase.  So we devised a quick and dirty way to track this delay.  The assumption is that this delay inherent in the work will perhaps be the unmeasured / uncontrolled variable that throws the correlation of story size with cycle time out of kilter.

Our devised technique for tracking delay per story - a yellow dot on the task with a tick mark for every day the task is stuck in-process (delayed).





See Also:

LeanKit published this excellent explanation of their choices in calculating cycle time within their tool:  Kanban Calculations: How to Calculate Cycle Time by Daniel Vacanti.
LeanKit Lead Time Metrics: Why Weekends Matter
Elon Musk turns a tweet into reality in 6 days by Loic Le Meur
The ROI of Multiple Small Releases
https://en.wikipedia.org/wiki/Frank_Bunker_Gilbreth_Sr.
https://en.wikipedia.org/wiki/Cheaper_by_the_Dozen


The Hummingbird Effect: How Galileo Invented Timekeeping and Forever Changed Modern Life
by Maria Popova.  How the invisible hand of the clock powered the Industrial Revolution and sparked the Information Age.

Categories: Blogs

April Fools Ideas for Techies

About SCRUM - Hamid Shojaee Axosoft - Wed, 03/22/2017 - 17:28

I am not much of an April Fool’s Day fan. Not everyone can achieve the giddy heights of, say, the BBC’s oldie-but-goodie prank on its viewers 60 years ago, and attempts to do so can be sadly ineffective. Nonetheless, April 1st is almost upon us, and as we welcome Spring back into our lives, we must also endure the one day that our workplace prankster (we’ll call him Tad) comes into his own.

It doesn’t have to be this way. You can strike at Tad first, as long as you have a few ideas up your sleeve. Here are some classic and alternative ways to make sure that Tad—and his silly unmatching socks and zany ties—don’t mess with you in the future.

WARNING: The following pranks can result in being:

  1. Tame to the point of failure: Prank is so tame it may genuinely go unnoticed, forever, with no consequences.
  2. Pretty underwhelming: Tad can easily shake it off.
  3. Mildly-to-actually annoying: This is the level you’re shooting for. These pranks hopefully disrupt Tad’s flow for long enough to give him a taste of his own medicine.
  4. Instant dismissal: Tad’s a real bore, but that doesn’t make it a good idea to pants him in the cafeteria or toss his laptop over the side of the balcony.
  5. Arrest: This includes but is not limited to: murder (murder is not a prank), violence, aggressive nudity, arson.

Ok, let’s get this out of the way so we can carry on enjoying our lives!

NOTE: A couple of these pranks are aimed at MacOS users, although they’re likely achievable on other platforms. They also require access to Tad’s work machine. To gain the element of surprise, consider a diversion tactic of your choice, or implement one or more of these pranks early (or even better, in August).

1. The low-hanging fruit: Chrome Extensions

A good prank needn’t take the week to plan, and for the half-hearted pranksters out there, most of the heavy lifting has been done for you through the gift of Chrome extensions. It turns out there are a lot of good ones out there, but here are a few to get your creative juices flowing:

  • April First Prank Toolkit: You’ll be shocked to hear that April First Prank Toolkit is a toolkit specifically aimed at pranking people on April first. It has a bunch of options for easy-yet-effective prankage; some pranks are more subtle than others (and therefore, IMO, more effective as slow-burner pranks). These include hiding the cursor and randomly reloading tabs–actions that make it less immediately obvious that your victim has fallen foul of a dastardly prank!
  • Prank ‘Em: Another toolkit; this one allows for subtle irritations such as a flash on the screen. Every setting has a frequency slider too, so you can make things happen intermittently for better covert prankular tactics.
  • Cenafy: This does one thing and one thing only (1/100 of the time): Gives you John Cena. It’s infrequent enough to be another winning slow-burner.
2. Keyboard typing replacements (MacOS only)

This one is short and sweet. Go to System Preferences > Keyboard and click the Text tab. Click the + symbol. From there you can replace ‘the’ with ‘teh’, for example.

NOTE: This replacement doesn’t work on every application, so you may have to be patient to see the fruits of your labors.

3. Application Overload (MacOS only)

Let’s get a bit more serious. In this prank, we’re going to create an AppleScript application that launches applications in a way that Tad isn’t expecting. When he launches Word, he expects Word to launch just once! Classic Tad. Let’s say you want to launch 20 instances of Word instead. Here goes:

  1. Open Script Editor (found in Applications > Utilities)
  2. Select New Document
  3. Add the following script:
    on run
        set wordPath to "/Applications/Microsoft Office 2011/Microsoft Word.app"
        do shell script "open -n " & quoted form of wordPath
        tell application "Microsoft Word" to activate
    end run
  4. What does that do? Well, it assigns the path of Word to a variable. We can then run a shell script to open that path and activate the app. Now, lines 3 and 4 are crucial, because we’re going to call them again—19 more times (or however many you want to do):
    on run
        set wordPath to "/Applications/Microsoft Office 2011/Microsoft Word.app"
        do shell script "open -n " & quoted form of wordPath
        tell application "Microsoft Word" to activate
        do shell script "open -n " & quoted form of wordPath
        tell application "Microsoft Word" to activate
        do shell script "open -n " & quoted form of wordPath
        tell application "Microsoft Word" to activate
        do shell script "open -n " & quoted form of wordPath
        tell application "Microsoft Word" to activate
        do shell script "open -n " & quoted form of wordPath
        tell application "Microsoft Word" to activate
        do shell script "open -n " & quoted form of wordPath
        tell application "Microsoft Word" to activate
        do shell script "open -n " & quoted form of wordPath
        tell application "Microsoft Word" to activate
        do shell script "open -n " & quoted form of wordPath
        tell application "Microsoft Word" to activate
        do shell script "open -n " & quoted form of wordPath
        tell application "Microsoft Word" to activate
        do shell script "open -n " & quoted form of wordPath
        tell application "Microsoft Word" to activate
        do shell script "open -n " & quoted form of wordPath
        tell application "Microsoft Word" to activate
        do shell script "open -n " & quoted form of wordPath
        tell application "Microsoft Word" to activate
        do shell script "open -n " & quoted form of wordPath
        tell application "Microsoft Word" to activate
        do shell script "open -n " & quoted form of wordPath
        tell application "Microsoft Word" to activate
        do shell script "open -n " & quoted form of wordPath
        tell application "Microsoft Word" to activate
        do shell script "open -n " & quoted form of wordPath
        tell application "Microsoft Word" to activate
        do shell script "open -n " & quoted form of wordPath
        tell application "Microsoft Word" to activate
        do shell script "open -n " & quoted form of wordPath
        tell application "Microsoft Word" to activate
        do shell script "open -n " & quoted form of wordPath
        tell application "Microsoft Word" to activate
        do shell script "open -n " & quoted form of wordPath
        tell application "Microsoft Word" to activate
    end run
  5. Go to File > Save
  6. Choose File Format: Application and call it “Microsoft Word”. Save it in a discreet location

  1. Hit Save
  2. You now have an app that will run Word 20 times when opened. But it still looks like a script!

  1. Find the REAL MS Word, right-click and select Get Info
  2. Click the application icon in the top-right and hit command + c
  3. Find your script, get info and click the script’s icon
  4. Hit command + v
  5. You now have an app that looks just like Word. Replace the actual Word icon in the dock and you’re set! This can be adapted to suit your needs. Maybe only have it launch Word twice—that’s enough to be a mild yet persistent annoyance. Or, have it open a nice selection of other applications.

4. Grunt Task Tomfoolery

This one’s a tad (ha!) obscure, but might be fun. WARNING: It could also be a fireable offense if your prank somehow makes it to production. You have been warned!

Let’s say Tad is working on a bunch of projects (typical Tad), and he uses Grunt to automate his tasks and build his websites. Open up one of his projects in the terminal and type:

npm install grunt-string-replace

Now open up Tad’s Gruntfile and add the task:

grunt.loadNpmTasks('grunt-string-replace');

In his build tasks, add 'string-replace' to the array of tasks.

Next, define that task! In the following example we’re focusing on the build folder, and it’s last in our list of tasks (this ensures that our hard work isn’t overwritten by subsequent tasks). We’re searching for all html files and replacing ‘the’ with ‘teh’ using a regular expression (defining a string as a pattern will only change the first instance).

'string-replace': {
    dist: {
        files: [{
            expand: true,
            cwd: 'build/',
            src: '**/*.html',
            dest: 'build/'
        }], 
        options: {
            replacements: [{
                pattern: /the /g,
                replacement: 'teh '
            }]
        }
    }
}

That’s it. Sit back and wait for Tad to run a build task. Of course, you could get more inventive with the replacements and perhaps only target one or two files. However, most importantly, remember that I never, ever, recommended you do this to anyone. Ever. Don’t do it.

Categories: Companies

Doors Now Open to the Better User Stories Advanced Video Training

This past week we’ve given away free online training and a number of resources to help you combat some of the most vexing problems agile teams encounter when writing user stories.

Now it’s time to open the doors to the full course: Better User Stories.

“In my 30 years of IT experience, this class has without question provided the most ‘bang for buck’ of any previous training course I have ever attended. If you or your organization are struggling with user stories, then this class is absolutely a must have. I simply can’t recommend it enough. 5 Stars!!” - Douglas Tooley

If you watched and enjoyed the free videos, you’ll love Better User Stories. It’s much more in-depth, with 9 modules of advanced training, worksheets, lesson transcripts, audio recordings, bonus materials, and quizzes to help cement the learning.

Registration for Better User Stories will only be open for one week

Click here to read more about the course and reserve your seat.

Because of the intense level of interest in this course, we’re expecting a large number of people to sign up. That’s why we’re only opening the doors for one week, so that we have the time and resources to get everyone settled.

If demand is even higher than we expect, we may close the doors early, so if you already know you’re interested, the next step is to:

Choose one of 3 levels of access. Which is right for you?

I know when it comes to training, everyone has different needs, objectives, learning preferences and budgets.

That’s why you can choose from 3 levels of access when you register:

  • Professional - Get the full course with lifetime access to all materials and any future upgrades
  • Expert Access - Acquire the full course and become part of the Better User Stories online community, where you can discuss ideas, share tips and submit questions to live Q+A calls with Mike
  • Work With Mike - Secure all of the above, plus private, 1:1 time with Mike to work through any specific issues or challenges.

Click here to choose the best level for your situation

What people are already saying

We recently finished a beta launch where a number of agilists worked through all 9 modules, providing feedback along the way. This let us tweak, polish and finish the course to make it even more practical and valuable.

Here’s what people had to say:


Thank you for an amazing course. Better User Stories is by far the best course I have had since I started my agile journey back in 2008.

Anne Aaroe

Packed full of humor, stories, and exercises the course is easy to take at one’s own leisure. Mike Cohn has a way of covering complex topics such as splitting user stories with easy to understand acronyms, charts and reinforces these concepts with quizzes and homework that really bring the learning objectives to life. So, whether you’re practicing scrum or just looking to learn more about user stories this course will provide you the roadmap needed to improve at any experience level, at a cost that everyone can appreciate.

Aaron Corcoran

Click here to read a full description of the course, and what you get with each of the 3 levels of access.

Questions about the course?

Let me know in the comments below.

Categories: Blogs

Deprecating private impediments

TargetProcess - Edge of Chaos Blog - Tue, 03/21/2017 - 17:01

Good day everyone!

In our efforts to continuously improve the Targetprocess experience for you, we're analyzing the performance of some core features, such as visualizing your data on dozens of different views or accessing that data through our API. It's a well-known fact in the software engineering industry that every feature comes with a cost. Unfortunately, sometimes the features we build become obsolete or just don't take off at all. In a perfect world, such features would be free or extremely cheap to maintain and we could simply ignore them. However, the real world is much more cruel, and quite often there is a cost associated with the ongoing support of these features.

Our "private impediments" feature is a good example of this. According to our analysis, its usage is close to none, but it adds a significant performance overhead to our data querying operations, most notably for inbound/outbound relations lookup. Therefore, we'd like to remove private impediments from Targetprocess in our upcoming release.

So, what does this mean for you?

If you don't use impediments at all, then nothing changes for you. If you use impediments but don't use the "Private" flag on them, then once again nothing changes for you. If you have private impediments, they will be deleted from your Targetprocess account, unless you make them public before the new release.

Wait, what? Are you really going to delete my private impediments?!

Well, yeah, but we've thought this through. There are basically 2 options: either delete them, or make them public. We assumed that it would be terrible to make someone's private data publicly visible. Also, given the fact that the private impediment usage is quite low, and we also continuously make backups for our on-demand instances, we'd be able to restore the data for individual customers if you ask us to.

Hopefully, this all makes sense for you. Don't hesitate to get in touch and contact our support if you have any questions!

Categories: Companies

Axosoft Dev Talk: React and Redux

About SCRUM - Hamid Shojaee Axosoft - Tue, 03/21/2017 - 16:32

In this talk, GitKraken developer Tyler Wanek discusses React and Redux, and their uses for one-way dataflow. Tyler will be using the following repo as an example:

https://github.com/implausible/React-Redux-SoDa-Demo

Fork and play along!

Part 1

Part 2

Part 3

Categories: Companies

How to Add the Right Amount of Detail to User Stories

Today’s post introduces the third installment in a free series of training videos all about user stories. Available for a limited time only, you can watch all released videos by signing up to the Better User Stories Mini-Course. Already signed up? Check your inbox for a link to the latest video, or continue reading to find out about today’s lesson.

An extremely common problem with user stories is including the right amount of detail.

If you include too much detail in user stories, story writing takes longer than it would otherwise. As with so many activities in the business world, we want to guard against spending more time on something than necessary.

Also, spending time adding too much detail leads to slower development as tasks like design, coding, and testing do not start until the details have been added. This delay also means it takes longer for the team and its product owner to get feedback from users and stakeholders.

But adding too little detail can lead to different but equally frustrating problems. Leave out detail and the team may struggle to fully implement a story during a sprint as they instead spend time seeking answers.

With too little detail, there’s also an increased chance the development team will go astray on a story by filling in the gaps with what they think is needed rather than asking for clarification.

There’s danger on both sides.

But, when you discover how much detail to add to your stories, it’s like Goldilocks finding the perfect bowl of porridge. Not too much, not too little, but just right.

But how do you discover how much is the right amount?

You can learn how in a new, 13-minute free video training I’ve just released. It’s part of the Better User Stories Mini-Course. To watch the free video, simply sign up here and you’ll get instant access.

Remember, if you’ve already signed up to the course you don’t need to sign in again, just go to www.betteruserstories.com and video #3 will already be unlocked for you.

Adding the right amount of detail--not too much, not too little--is one of the best ways to improve how your team works with user stories. I’m confident this new video will help.

Mike

P.S. This video is only going to be available for a very short period. I encourage you to watch it now at www.betteruserstories.com.

Categories: Blogs

Scrum Days Poland, Warsaw, Poland, June 5-6 2017

Scrum Expert - Tue, 03/21/2017 - 10:00
Scrum Days Poland is a two-day conference focused on Scrum and Agile project management approaches. It aims to create an environment where people can meet, build social networks, do business and have...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
Categories: Communities

Retromat – Random Scrum Retrospectives Plan Generator

Scrum Expert - Mon, 03/20/2017 - 19:44
Retromat is a free online website that allows you to generate random plans for Agile and Scrum retrospectives. Out of a pool of more than 100 activities, it selects one for each of the five phases (stage...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
Categories: Communities

Learning Tools beginning Exponential Growth

Learning tools are beginning to benefit from the exponential growth process of knowledge creation and transference.  Here is Seeing Theory, an example of this.  Did you do well in Probability and Stats in school?  Funny enough, I can predict with astonishing accuracy that a majority of you said "NO!"  I also struggled with those courses, and may have repeated one a time or two.  But now, with some age, I find it much more fascinating to study this subject.


Seeing Theory Learning Site
"By 2030 students will be learning from robot teachers 10 times faster than today" by World Economic Forum.


This is what happens when humans debate ethics in front of a super intelligent learning AI.






See Also:

TED Radio Hour : NPR : Open Source World  Tim Berners-Lee tells the story of how Gopher killed its user base's growth and how CERN declared their WWW open source on April 30th, 1993, ensuring it would continue to prosper.  And was its growth exponential?


Categories: Blogs

Velocity Calculus - The mathematical study of the changing software development effort by a team


In the practice of Scrum many people appear to have their favorite method of calculating the team's velocity. For many, this exercise appears very academic. Yet when you get three people and ask them, you will invariably get more answers than you have belly-buttons.


Velocity—the rate of change in the position of an object; a vector quantity, with both magnitude and direction. “Calculus is the mathematical study of change.” — Donald Latorre 
This pamphlet describes the method I use to teach beginning teams this one very important Scrum concept via a photo journal simulation.

Some of the basic reasons many teams are "doing it wrong"... (from my comment on Doc Norton's FB question: Hey social media friends, I am curious to hear about dysfunctions on agile teams related to use of velocity. What have you seen?)


  • management not understanding the purpose of Velocity as an empirical measure;
  • teams using some bogus statistical manipulation called an average without understanding the constraints within which an average is valid;
  • SM allowing teams to carry over stories and get credit for multiple sprints within one measurement (lack of understanding of empirical);
  • pressure to give "credit" for effort but zero results - a culture dynamic that creates a vicious feedback loop;
  • lack of understanding of the virtuous cycle that can be built with empirical measurement and understanding of trends;
  • no action to embrace the virtuous benefits of a measure-respond-adapt model (specifically story slicing to appropriate size)
... there's 6 - but saving the best for last:
  • breaking the basic tenets of the Scrum estimation model - allow me to expand for those who have already condemned me for violating written (or suggesting unwritten) dogma...
    • a PBL item has a "size" before being Ready (a gate action) for planning;
    • the team adjusts the PBL item size any/every time they touch the item and learn more about it (like at planning/grooming);
    • each item is sized based on effort/etc. from NOW (or start of sprint - a point in time) to DONE (never on past sunk cost effort);
    • empirical evidence and updated estimates are a good way to plan;
  • therefore carryover stories are resized before being brought into the next sprint - also reprioritized - and crying over spilt milk or lost effort credit is not allowed in baseball (or sprint planning)

Day 1 - Sprint Planning
A simulated sprint plan with four stories is developed. The team forecasts they will do 26 points in this sprint.




Day 2
The team really gets to work.




Day 3
Little progress is visible, concern starts to show.


Day 4
Do you feel the sprint progress starting to slide out of control?



Day 5
About one half of the schedule is spent, but only one story is done.



Day 6
The team has started work on all four stories; will this amount of ‘WIP’ come back to hurt them?




Day 7
Although two stories are now done, the time box is quickly expiring.


Day 8
The team is mired in the largest story.



Day 9
The output of the sprint is quite fuzzy. What will be done for the demo, and what do we do with the partially completed work?


Day 10
The Sprint Demo day. Three stories done (A, B, & D) get demoed to the PO and accepted.



Close the Sprint
Calculate the Velocity - a simple arithmetic sum.



Story C is resized given its known state and the effort to get it from here to done. 



What is done with the unfinished story? It goes back into the backlog and is ordered and resized.



Backlog grooming (refinement) is done to prepare for the next sprint planning session.





Trophies of accomplishments help motivation and release planning. Yesterday’s weather (pattern) predicts the next sprint’s velocity.
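As a sketch of that arithmetic (my own TypeScript illustration; the story names and point values are made up, and only the no-partial-credit rule comes from the text above):

interface Story { name: string; points: number; done: boolean; }

// Velocity is a simple arithmetic sum over the stories accepted as done.
const velocity = (sprint: Story[]): number =>
  sprint.filter(story => story.done).reduce((sum, story) => sum + story.points, 0);

// Hypothetical sprint: stories A, B, and D accepted; story C carries over.
const sprint1: Story[] = [
  { name: "A", points: 5, done: true },
  { name: "B", points: 8, done: true },
  { name: "C", points: 8, done: false }, // resized and returned to the backlog, no partial credit
  { name: "D", points: 3, done: true },
];

const sprint1Velocity = velocity(sprint1); // 16
// Yesterday's weather: the last observed velocity is the forecast for the next sprint.
const forecastForSprint2 = sprint1Velocity;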


Sprint 2 Begins with Sprint Planning
Day 1
Three stories are selected by the team, including the resized (now 8 points) story C.

Day 2
Work begins on yet another sprint.


Day 3
Work progresses on story tasks.


The cycle of days repeats and the next sprint completes.


Close Sprint 2
Calculate the Velocity - a simple arithmetic sum.


In an alternative world we may do more complex calculus. But will it lead us to better predictability?

In this alternative world, one wishes to receive partial credit for work attempted.  Yet the story was resized based upon its known state and the effort to get it to done.




Simplicity is the ultimate sophistication. — Leonardo da Vinci 
Now let’s move from the empirical world of measurement and into the realm of lies.








Simply graphing the empirical results and using the human eye & mind to predict is more accurate than many people’s math.




Velocity is an optimistic measure. An early objective is to have a predictable team.

Velocity may be a good predictor of release duration. Yet it is always an optimistic predictor.




Variance Graphed: Pessimistic projection (red line) & optimistic projection (green line) of release duration.



While in the realm of fabrication of information — let’s better describe the summary average with its variance.








Categories: Blogs

Works on my Machine

Leading Agile - Fri, 03/17/2017 - 13:00

One of the most insidious obstacles to continuous delivery (and to continuous flow in software delivery generally) is the works-on-my-machine phenomenon. Anyone who has worked on a software development team or an infrastructure support team has experienced it. Anyone who works with such teams has heard the phrase spoken during (attempted) demos. The issue is so common there’s even a badge for it:

Perhaps you have earned this badge yourself. I have several. You should see my trophy room.

There’s a longstanding tradition on Agile teams that may have originated at ThoughtWorks around the turn of the century. It goes like this: When someone violates the ancient engineering principle, “Don’t do anything stupid on purpose,” they have to pay a penalty. The penalty might be to drop a dollar into the team snack jar, or something much worse (for an introverted technical type), like standing in front of the team and singing a song. To explain a failed demo with a glib “<shrug>Works on my machine!</shrug>” qualifies.

It may not be possible to avoid the problem in all situations. As Forrest Gump said…well, you know what he said. But we can minimize the problem by paying attention to a few obvious things. (Yes, I understand “obvious” is a word to be used advisedly.)

Pitfall 1: Leftover configuration

Problem: Leftover configuration from previous work enables the code to work on the development environment (and maybe the test environment, too) while it fails on other environments.

Pitfall 2: Development/test configuration differs from production

The solutions to this pitfall are so similar to those for Pitfall 1 that I’m going to group the two.

Solution (tl;dr): Don’t reuse environments.

Common situation: Many developers set up an environment they like on their laptop/desktop or on the team’s shared development environment. The environment grows from project to project, as more libraries are added and more configuration options are set. Sometimes the configurations conflict with one another, and teams/individuals often make manual configuration adjustments depending on which project is active at the moment. It doesn’t take long for the development configuration to become very different from the configuration of the target production environment. Libraries that are present on the development system may not exist on the production system. You may run your local tests assuming you’ve configured things the same as production only to discover later that you’ve been using a different version of a key library than the one in production. Subtle and unpredictable differences in behavior occur across development, test, and production environments. The situation creates challenges not only during development, but also during production support work when we’re trying to reproduce reported behavior.

Solution (long): Create an isolated, dedicated development environment for each project

There’s more than one practical approach. You can probably think of several. Here are a few possibilities:

  • Provision a new VM (locally, on your machine) for each project. (I had to add “locally, on your machine” because I’ve learned that in many larger organizations, developers must jump through bureaucratic hoops to get access to a VM, and VMs are managed solely by a separate functional silo. Go figure.)
  • Do your development in an isolated environment (including testing in the lower levels of the test automation pyramid), like Docker or similar.
  • Do your development on a cloud-based development environment that is provisioned by the cloud provider when you define a new project.
  • Set up your continuous integration (CI) pipeline to provision a fresh VM for each build/test run, to ensure nothing will be left over from the last build that might pollute the results of the current build.
  • Set up your continuous delivery (CD) pipeline to provision a fresh execution environment for higher-level testing and for production, rather than promoting code and configuration files into an existing environment (for the same reason). Note that this approach also gives you the advantage of linting, style-checking, and validating the provisioning scripts in the normal course of a build/deploy cycle. Convenient.

All those options won’t be feasible for every conceivable platform or stack. Pick and choose, and roll your own as appropriate. In general, all these things are pretty easy to do if you’re working on Linux. All of them can be done for other *nix systems with some effort. Most of them are reasonably easy to do with Windows; the only issue there is licensing, and if your company has an enterprise license, you’re all set. For other platforms, such as IBM zOS or HP NonStop, expect to do some hand-rolling of tools.

Anything that’s feasible in your situation and that helps you isolate your development and test environments will be helpful. If you can’t do all these things in your situation, don’t worry about it. Just do what you can do.

Provision a new VM locally

If you’re working on a desktop, laptop, or shared development server running Linux, FreeBSD, Solaris, Windows, or OSX, then you’re in good shape. You can use virtualization software such as VirtualBox or VMware to stand up and tear down local VMs at will. For the less-mainstream platforms, you may have to build the virtualization tool from source.

One thing I usually recommend is that developers cultivate an attitude of laziness in themselves. Well, the right kind of laziness, that is. You shouldn’t be content to provision a server manually more than once. Take the time during that first provisioning exercise to script the things you discover along the way. Then you won’t have to remember them and repeat the same missteps again. (Well, unless you enjoy that sort of thing, of course.)

For example, here are a few provisioning scripts that I’ve come up with when I needed to set up development environments. These are all based on Ubuntu Linux and written in Bash. I don’t know if they’ll help you, but they work on my machine.
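As a rough illustration of the shape such a script might take (a sketch only, assuming an Ubuntu base and apt; the package names are placeholders for whatever your project actually needs):

```bash
#!/usr/bin/env bash
# Illustrative provisioning sketch for an Ubuntu-based development VM.
# Package names below are examples only; substitute your project's real dependencies.
set -euo pipefail

export DEBIAN_FRONTEND=noninteractive   # never prompt during installs

sudo apt-get update
sudo apt-get install -y build-essential git curl

# Pin library versions here when they need to match production, e.g.
#   sudo apt-get install -y somelib-dev=<version-used-in-production>
```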

If your company is running RedHat Linux in production, you’ll probably want to adjust these scripts to run on CentOS or Fedora, so that your development environments will be reasonably close to the target environments. No big deal.

If you want to be even lazier, you can use a tool like Vagrant to simplify the configuration definitions for your VMs.
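For example (a sketch only, assuming the public ubuntu/xenial64 box; adapt the box name and provisioning to your own stack):

```bash
# One Vagrant-managed VM per project.
vagrant init ubuntu/xenial64   # writes a Vagrantfile into the project directory
# (edit the Vagrantfile so config.vm.provision points at your provisioning script)
vagrant up                     # build and provision the VM
vagrant ssh                    # work inside the isolated environment
vagrant destroy -f             # tear it down when the project is done
```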

One more thing: Whatever scripts you write and whatever definition files you write for provisioning tools, keep them under version control along with each project. Make sure whatever is in version control for a given project is everything necessary to work on that project…code, tests, documentation, scripts…everything. This is rather important, I think.

Do your development in a container

One way of isolating your development environment is to run it in a container. Most of the tools you’ll read about when you search for information about containers are really orchestration tools intended to help us manage multiple containers, typically in a production environment. For local development purposes, you don’t need that much functionality. There are a couple of practical container tools for this purpose:


These are Linux-based. Whether it’s practical for you to containerize your development environment depends on what technologies you need. To containerize a development environment for another OS, such as Windows, may not be worth the effort over just running a full-blown VM. For other platforms, it’s probably impossible to containerize a development environment.
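As a rough sketch of the idea, here’s what running a build in a throwaway container with Docker and a stock Ubuntu image might look like (provision.sh and run-tests.sh are hypothetical stand-ins for your own steps):

```bash
# Run the build and tests inside a disposable container instead of on the host.
docker run --rm \
  -v "$PWD":/workspace \
  -w /workspace \
  ubuntu:16.04 \
  bash -c "./provision.sh && ./run-tests.sh"
```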

Develop in the cloud

This is a relatively new option, and it’s feasible for a limited set of technologies. The advantage over building a local development environment is that you can stand up a fresh environment for each project, guaranteeing you won’t have any components or configuration settings left over from previous work. Here are a couple of options:


Expect to see these environments improve, and expect to see more players in this market. Check which technologies and languages are supported to see whether one of these will be a fit for your needs. Because of the rapid pace of change, there’s no sense in listing what’s available as of the date of this article.

Generate test environments on the fly as part of your CI build

Once you have a script that spins up a VM or configures a container, it’s easy to add it to your CI build. The advantage is that your tests will run on a pristine environment, with no chance of false positives due to leftover configuration from previous versions of the application or from other applications that had previously shared the same static test environment, or because of test data modified in a previous test run.

Many people have scripts they’ve hacked up to simplify their lives, but those scripts may not be suitable for unattended execution. Your scripts (or the tools that interpret your declarative configuration specifications) have to run without issuing any prompts (such as prompting for an administrator password). They also need to be idempotent (that is, running them multiple times does no harm, in case of restarts). Any runtime values the script needs must be obtainable by the script as it runs, without manual “tweaking” before each run.
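A small sketch of what those qualities look like in a Bash script (the names here are hypothetical; the point is the pattern):

```bash
#!/usr/bin/env bash
set -euo pipefail
export DEBIAN_FRONTEND=noninteractive        # never prompt

# Idempotent: only create things that aren't already there.
id -u appuser >/dev/null 2>&1 || sudo useradd --system appuser
sudo mkdir -p /opt/myapp

# Runtime values come from the environment (set by the pipeline), not a human.
DB_HOST="${DB_HOST:?DB_HOST must be provided by the pipeline}"
```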

The idea of “generating an environment” may sound infeasible for some stacks. Take the suggestion broadly. For a Linux environment, it’s pretty common to create a VM whenever you need one. For other environments, you may not be able to do exactly that, but there may be some steps you can take based on the general notion of creating an environment on the fly.

For example, a team working on a CICS application on an IBM mainframe can define and start a CICS environment any time by running it as a standard job. In the early 1980s, we used to do that routinely. As the 1980s dragged on (and continued through the 1990s and 2000s, in some organizations), the world of corporate IT became increasingly bureaucratized until this capability was taken out of developers’ hands.

Strangely, as of 2017 very few development teams have the option to run their own CICS environments for experimentation, development, and initial testing. I say “strangely” because so many other aspects of our working lives have improved dramatically, while that aspect seems to have moved in retrograde. We don’t have such problems working on the front end of our applications, but when we move to the back end we fall through a sort of time warp.

From a purely technical point of view, there’s nothing to stop a development team from doing this. It qualifies as “generating an environment,” in my view. You can’t run a CICS system “in the cloud” or “on a VM” (at least, not as of 2017), but you can apply “cloud thinking” to the challenge of managing your resources.

Similarly, you can apply “cloud thinking” to other resources in your environment, as well. Use your imagination and creativity. Isn’t that why you chose this field of work, after all?

Generate production environments on the fly as part of your CD pipeline

This suggestion is pretty much the same as the previous one, except that it occurs later in the CI/CD pipeline. Once you have some form of automated deployment in place, you can extend that process to include automatically spinning up VMs or automatically reloading and provisioning hardware servers as part of the deployment process. At that point, “deployment” really means creating and provisioning the target environment, as opposed to moving code into an existing environment.

This approach solves a number of problems beyond simple configuration differences. For instance, if a hacker has introduced anything to the production environment, rebuilding that environment out of source that you control eliminates that malware. People are discovering there’s value in rebuilding production machines and VMs frequently even if there are no changes to “deploy,” for that reason as well as to avoid “configuration drift” that occurs when we apply changes over time to a long-running instance.

Many organizations run Windows servers in production, mainly to support third-party packages that require that OS. An issue with deploying to an existing Windows server is that many applications require an installer to be present on the target instance. Generally, information security people frown on having installers available on any production instance. (FWIW, I agree with them.)

If you create a Windows VM or provision a Windows server on the fly from controlled sources, then you don’t need the installer once the provisioning is complete. You won’t re-install an application; if a change is necessary, you’ll rebuild the entire instance. You can prepare the environment before it’s accessible in production, and then delete any installers that were used to provision it. So, this approach addresses more than just the works-on-my-machine problem.

When it comes to back end systems like zOS, you won’t be spinning up your own CICS regions and LPARs for production deployment. The “cloud thinking” in that case is to have two identical production environments. Deployment then becomes a matter of switching traffic between the two environments, rather than migrating code. This makes it easier to implement production releases without impacting customers. It also helps alleviate the works-on-my-machine problem, as testing late in the delivery cycle occurs on a real production environment (even if customers aren’t pointed to it yet).
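As a very rough sketch of the traffic-switching idea (the deploy-and-test and lb-set-active commands are made-up placeholders; the real mechanism depends entirely on your network and routing setup):

```bash
# Two identical production environments, "blue" and "green".
ACTIVE=$(cat /etc/deploy/active)                        # which one is serving traffic now
IDLE=$([ "$ACTIVE" = "blue" ] && echo green || echo blue)

deploy-and-test "$IDLE"        # hypothetical: rebuild and verify the idle twin
lb-set-active "$IDLE"          # hypothetical: point traffic at the freshly built twin
echo "$IDLE" > /etc/deploy/active
```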

The usual objection to this is the cost (that is, fees paid to IBM) to support twin environments. This objection is usually raised by people who have not fully analyzed the costs of all the delay and rework inherent in doing things the “old way.”

Pitfall 3: Unpleasant surprises when code is merged

Problem: Different teams and individuals handle code check-out and check-in in various ways. Some check out code once and modify it throughout the course of a project, possibly over a period of weeks or months. Others commit small changes frequently, updating their local copy and committing changes many times per day. Most teams fall somewhere between those extremes.

Generally, the longer you keep code checked out and the more changes you make to it, the greater the chances of a collision when you merge. It’s also likely that you will have forgotten exactly why you made every little change, and so will the other people who have modified the same chunks of code. Merges can be a hassle.

During these merge events, all other value-add work stops. Everyone is trying to figure out how to merge the changes. Tempers flare. Everyone can claim, accurately, that the system works on their machine.

Solution: A simple way to avoid this sort of thing is to commit small changes frequently, run the test suite with everyone’s changes in place, and deal with minor collisions quickly before memory fades. It’s substantially less stressful.

The best part is you don’t need any special tooling to do this. It’s just a question of self-discipline. On the other hand, it only takes one individual who keeps code checked out for a long time to mess everyone else up. Be aware of that, and kindly help your colleagues establish good habits.
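With git, for instance, the rhythm looks something like this (run-tests.sh is a hypothetical stand-in for your own test command):

```bash
git pull --rebase                        # bring in everyone else's recent changes
./run-tests.sh                           # run the suite with everything merged locally
git add -p                               # stage one small, coherent change
git commit -m "Describe the small change"
git push                                 # share it while the reasoning is still fresh
```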

Pitfall 4: Integration errors discovered late

Problem: This problem is similar to Pitfall 3, but one level of abstraction higher. Even if a team commits small changes frequently and runs a comprehensive suite of automated tests with every commit, they may experience significant issues integrating their code with other components of the solution, or interacting with other applications in context.

The code may work on my machine, as well as on my team’s integration test environment, but as soon as we take the next step forward, all hell breaks loose.

Solution: There are a couple of solutions to this problem. The first is static code analysis. It’s becoming the norm for a continuous integration pipeline to include static code analysis as part of every build. This occurs before the code is compiled. Static code analysis tools examine the source code as text, looking for patterns that are known to result in integration errors (among other things).

Static code analysis can detect structural problems in the code such as cyclic dependencies and high cyclomatic complexity, as well as other basic problems like dead code and violations of coding standards that tend to increase cruft in a codebase. It’s just the sort of cruft that causes merge hassles, too.

A related suggestion is to treat warning-level messages from static code analysis tools and from compilers as real errors. Accumulating warnings is a great way to end up with mysterious, unexpected behaviors at runtime.
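With gcc or clang, for example, that’s just a matter of compiler flags (module.c is a placeholder; adapt the idea to whatever compiler or analyzer you actually use):

```bash
# Promote all warnings to errors so they can't quietly accumulate.
gcc -Wall -Wextra -Werror -c module.c
```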

The second solution is to integrate components and run automated integration test suites frequently. Set up the CI pipeline so that when all unit-level checks pass, then integration-level checks are executed automatically. Let failures at that level break the build, just as you do with the unit-level checks.
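Expressed as a simple CI script, the gating might look like this (the script names are hypothetical stand-ins for your real build steps):

```bash
#!/usr/bin/env bash
set -e                        # any failing step breaks the build
./run-static-analysis.sh
./run-unit-tests.sh
./run-integration-tests.sh    # only reached when everything above has passed
```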

With these two methods, you can detect integration errors as early as possible in the delivery pipeline. The earlier you detect a problem, the easier it is to fix.

Pitfall 5: Deployments are nightmarish all-night marathons

Problem: Circa 2017 it’s still common to find organizations where people have “release parties” whenever they deploy code to production. Release parties are just like all-night frat parties, only without the fun.

The problem is that the first time applications are executed in a production-like environment is when they are executed in the real production environment. Many issues only become visible when the team tries to deploy to production.

Of course, there’s no time or budget allocated for that. People working in a rush may get the system up and running somehow, but often at the cost of regressions that pop up later in the form of production support issues.

And it’s all because at each stage of the delivery pipeline, the system “worked on my machine,” whether a developer’s laptop, a shared test environment configured differently from production, or some other unreliable environment.

Solution: The solution is to configure every environment throughout the delivery pipeline as close to production as possible. The following are general guidelines that you may need to modify depending on local circumstances.

If you have a staging environment, rather than twin production environments, it should be configured with all internal interfaces live and external interfaces stubbed, mocked, or virtualized. Even if this is as far as you take the idea, it will probably eliminate the need for release parties. But if you can, it’s good to continue upstream in the pipeline, to reduce unexpected delays in promoting code along.

Test environments between development and staging should be running the same version of the OS and libraries as production. They should be isolated at the appropriate boundary based on the scope of testing to be performed.

At the beginning of the pipeline, develop on the same OS and the same general configuration as production, if possible. You probably won’t have as much memory or as many processors as the production environment. The development environment also won’t have any live interfaces; all dependencies external to the application will be faked.

At a minimum, match the OS and release level to production as closely as you can. For instance, if you’ll be deploying to Windows Server 2016, then use a Windows Server 2016 VM to run your quick CI build and unit test suite. Windows Server 2016 is based on NT 10, so do your development work on Windows 10 because it’s also based on NT 10. Similarly, if the production environment is Windows Server 2008 R2 (based on NT 6.1) then develop on Windows 7 (also based on NT 6.1). You won’t be able to eliminate every single configuration difference, but you will be able to avoid the majority of incompatibilities.

Follow the same rule of thumb for Linux targets and development systems. For instance, if you will deploy to RHEL 7.3 (kernel version 3.10.x), then run unit tests on the same OS if possible. Otherwise, look for (or build) a version of CentOS based on the same kernel version as your production RHEL (don’t assume). At a minimum, run unit tests on a Linux distro based on the same kernel version as the target production instance. Do your development on CentOS or a Fedora-based distro to minimize inconsistencies with RHEL.
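A quick sketch of how a build step might enforce that kernel match, using the 3.10 series from the RHEL 7.3 example above (the expected value is a placeholder for your own production kernel series):

```bash
# Fail fast if the build agent's kernel series doesn't match the production target.
EXPECTED_KERNEL="3.10"
ACTUAL_KERNEL=$(uname -r | cut -d. -f1,2)
if [ "$ACTUAL_KERNEL" != "$EXPECTED_KERNEL" ]; then
  echo "Kernel $ACTUAL_KERNEL does not match production series $EXPECTED_KERNEL" >&2
  exit 1
fi
```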

If you’re using a dynamic infrastructure management approach that includes building OS instances from source, then this problem becomes much easier to control. You can build your development, test, and production environments from the same sources, assuring version consistency throughout the delivery pipeline. But the reality is that very few organizations are managing infrastructure in this way as of 2017. It’s more likely that you’ll configure and provision OS instances based on a published ISO, and then install packages from a private or public repo. You’ll have to pay close attention to versions.

If you’re doing development work on your own laptop or desktop, and you’re using a cross-platform language (Ruby, Python, Java, etc.), you might think it doesn’t matter which OS you use. You might have a nice development stack on Windows or OSX (or whatever) that you’re comfortable with. Even so, it’s a good idea to spin up a local VM running an OS that’s closer to the production environment, just to avoid unexpected surprises.

For embedded development where the development processor is different from the target processor, include a compile step in your low-level TDD cycle with the compiler options set for the target platform. This can expose errors that don’t occur when you compile for the development platform. Sometimes the same version of the same library will exhibit different behaviors when executed on different processors.
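For instance (a sketch only; arm-none-eabi-gcc is just one example cross-compiler, and sensor.c is a hypothetical source file):

```bash
# Compile the same unit for both the host and the target as part of the TDD loop.
gcc -Wall -Wextra -Werror -c sensor.c -o /dev/null                 # host compile
arm-none-eabi-gcc -Wall -Wextra -Werror -c sensor.c -o /dev/null   # target compile may surface target-only issues
```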

Another suggestion for embedded development is to constrain your development environment to have the same memory limits and other resource constraints as the target platform. You can catch certain types of errors early by doing this.
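On a Unix-like development system, a crude way to approximate this is to cap the resources available to the test run (the limit and script name below are placeholders):

```bash
# Run the suite under a virtual-memory cap roughly matching the target device.
ulimit -v 65536        # limit in KB; substitute a value that matches your target
./run-unit-tests.sh    # hypothetical test command
```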

For some of the older back end platforms, it’s possible to do development and unit testing off-platform for convenience. Fairly early in the delivery pipeline, you’ll want to upload your source to an environment on the target platform and build/test there.

For instance, for a C++ application on, say, HP NonStop, it’s convenient to do TDD on whatever local environment you like (assuming that’s feasible for the type of application), using any compiler and a unit testing framework like CppUnit.

Similarly, it’s convenient to do COBOL development and unit testing on a Linux instance using GnuCOBOL; much faster and easier than using OEDIT on-platform for fine-grained TDD.

But in these cases the target execution environment is very different from the development environment. You’ll want to exercise the code on-platform early in the delivery pipeline to eliminate works-on-my-machine surprises.

Summary

The author’s observation is that the works-on-my-machine problem is one of the leading causes of developer stress and lost time. The author further observes that the main cause of the works-on-my-machine problem is differences in configuration across development, test, and production environments.

The basic advice is to avoid configuration differences to the extent possible. Take pains to ensure all environments are as similar to production as is practical. Pay attention to OS kernel versions, library versions, API versions, compiler versions, and the versions of any home-grown utilities and libraries. When differences can’t be avoided, then make note of them and treat them as risks. Wrap them in test cases to provide early warning of any issues.

The second suggestion is to automate as much testing as possible at different levels of abstraction, merge code frequently, build the application frequently, run the automated test suites frequently, deploy frequently, and (where feasible) build the execution environment frequently. This will help you detect problems early, while the most recent changes are still fresh in your mind, and while the issues are still minor.

Let’s fix the world so that the next generation of software developers doesn’t understand the phrase, “Works on my machine.”

The post Works on my Machine appeared first on LeadingAgile.

Categories: Blogs

Five Simple but Powerful Ways to Split User Stories

Today’s post introduces the second installment in a free series of training videos all about user stories. Available for a limited time only, you can watch all released videos by signing up to the Better User Stories Mini-Course. Already signed up? Check your inbox for a link to the latest video, or continue reading to find out about today’s lesson.

One of the most common struggles faced by agile teams is the need to split user stories. I'm sure you've struggled with this. I certainly did at first.

In fact, when I first began using Scrum, some of our product backlog items were so big that we occasionally opted for six-week sprints. With a bit more experience, though, that team and I saw enough ways to split work that we could have done one-day sprints if we'd wanted.

But splitting stories was hard at first. Really hard.

But I've got some good news for you. Not only have I figured out how to split stories on my own, I've learned how to explain how to do it so that anyone can quickly become an expert.

What I discovered is that almost every story can be split with one of five techniques. Learn those five simple techniques and you're set.

Even better, the five techniques form an easily memorable acronym: SPIDR.

I've just released a new, 20-minute, free video training that describes each of these techniques as part of the Better User Stories Mini-Course. To watch it simply sign up here and you’ll get instant access.

Remember, if you’ve already signed up to the course you don’t need to sign in again, just check your inbox for an email from me with a link to the latest lesson.

Unless you've already cracked the code on splitting stories, you definitely want to learn the five techniques that make up the SPIDR approach by watching this free video training.

Mike

P.S. This video is only going to be available for a very short period. I encourage you to watch it now at https://www.betteruserstories.com.

Categories: Blogs

How to open views with the currently selected Projects and Teams

TargetProcess - Edge of Chaos Blog - Thu, 03/16/2017 - 10:22

Some time ago, we redesigned the Projects-Teams selector. It became a part of views, and the global selector was removed. This helped to standardize and simplify views, but made it impossible to set Projects and Teams just once and navigate through several views with that selection.

To make this scenario work, in v.3.11.0 we’ve implemented a keyboard shortcut that lets you navigate to a new view with the currently selected Projects and Teams.

As usual, you can select the Projects and Teams you need in the selector at the top of the view (this selection won't override the predefined Projects and Teams for the view).


Explore the data. If you need to navigate to another view with the same selected Projects and Teams, simply hold Alt and click on the view you wish to explore.


As a result, the view will be opened using the Projects and Teams that were selected on the previous view.


You can use Alt+Click to navigate through any number of views with the currently selected Projects and Teams.

For more details on how the Projects and Teams selector works, see the guide post.

Categories: Companies

Women In Agile Workshop, August 6, Orlando, USA

Scrum Expert - Thu, 03/16/2017 - 08:00
On the Sunday before the start of the Agile2017 Conference, the Agile Alliance is organizing the Women In Agile Workshop on the theme “Empowering the Changing Face of Agile.” The Women In Agile...

Categories: Communities

Targetprocess v.3.11.0: multiple final states, team mentions, Project selector in Reports, and plenty of fixed bugs.

TargetProcess - Edge of Chaos Blog - Wed, 03/15/2017 - 20:09
Redesigned Settings

We've done some housekeeping over the last few months. As a result, the Targetprocess Settings page has a new look and feel. We hope we've made it more pleasant to work with.

As before, you can reach Settings by clicking the gear icon at the top right corner. 


We've regrouped all settings into logical sections to make it easier to quickly find the options you need. You may need some time to get used to the new grouping, but it makes much more sense arranged this way.

We have completely reworked the Tags section. You can read more about this here. We've also removed the Team section from Settings, as it's easier to assign multiple users to Projects and Teams right from the Project or Team view. You can read more about this here.

Regular users have access to Import and the Diagnostic and Logs section. Admin users will see additional groups of global settings.


Multiple final project states

There used to be no easy way to find out if a closed bug was rejected or fixed.

You can now use multiple final project states for this. For example, let's say I want to set a new kind of resolution for a Bug or Request. I would go to Process Setup > Bug Workflow, set up Rejected and Duplicate bug states, mark them as final, and set the Completed state as the default final state. All three states have the semantics of a “final” state: cards in these states are grayed out, end dates are set automatically when an entity moves into them, and filters with 'IsFinal' apply.

Multiple final states are not supported if you use Team Workflows.

Mention Teams in comments

When you start typing a few characters after an '@' symbol, you will now see not only a list of users, but also a list of teams above the users. If you mention a team, all of its members receive an email notification. In the comment, the mentioned Team (or User) becomes a link to that Team's (or User's) view.

Shortcut 'alt+click' opens next View with the current Team/Project context applied

This is a solution that will be helpful for anyone who uses the same set of boards for different Teams or Projects. You can now set your Projects and Teams selection just once and navigate through several views with that selection.

To make such scenarios work, you can do the following: Select the Projects and Teams you wish to see using the selector at the top of the view. To open another view using the selection you just applied, you can hold the ‘Alt’ key and click on the new view in the left menu. For more details visit the post.

Project/Team selector added to Process Control, Cycle Time Distribution, Relations Network charts

You will no longer have to set the proper Project or Team before you go into Reports, or have to close a Report to reach the context menu. Now you have a Team/Project selector right in Report Settings.


Cumulative Flow and Burn Down charts:

You can now easily change the selected Project using a dropdown list.


Project board now shows active and inactive projects in different shades

Inactive project cards are now slightly greyed out, the way disabled elements usually look. Hover your mouse over a card to see its details pop-up; those details will be greyed out as well.


CSV import Features

We've improved CSV import a bit. Now when you import a batch of Features, you can map them to a parent Epic so that they get to the right place in your work hierarchy.

Cards and Axes sorting order unified

We cleaned up a sorting inconsistency. Previously, Release lanes were sorted by Creation Date while Release cards were sorted by Project. That was a bit odd, so cards and lanes of the same entity type now have a unified sorting order: Releases are sorted by Start Date.

Service Desk Widget in Targetprocess

We migrated from UserVoice to our very own Service Desk and updated the 'Contact Us' widget. Now you can post your issues and ideas right from the widget and navigate easily to our Service Desk portal.

Fixed Bugs
  • You can now send images and attachments in comments with email notifications
  • Fixed DSL filters to find items with attachments. The following filters will now work: '?attachments.count>0'  and '?Attachments.Where(Name is 'log.txt')'
  • Timesheet: fixed time being improperly associated with a NULL role when added by a user whose role was not responsible for that entity
  • Fixed possible effort inconsistency when applying metric results
  • Fixed comment losses which occurred if the 'Source' button was clicked twice.
  • Timesheet: Fixed transfers to states which require a comment
  • Fixed Team state transfer for non-admin users, which failed if Team states were mapped to the last Project state.
  • Fixed cache for List views to support complex filters by custom fields
  • Fixed effort recalculation for User Stories when their Tasks are removed or moved to/from another User Story
  • Fixed Dashboard TODO widget: the filter by entity type was applied on first load only
  • Fixed Epic > Feature > User Story List views which showed only 25 items and no 'show more' link
  • Fixed Requests and Test Plan Run effort units in a List view to be consistent with the entity views
  • Fixed improper Initial Estimate field population with an Effort value when a User Story is copied to a new Project
  • Fixed export to CSV so that it takes the axes filter into account
  • Added the ability to filter entities by a 'None' option in a Multiple selection field ('?MultipleSelectionCustomField is None')
  • Fixed Quick Add failures if there are no teams in the system yet
  • Fixed mixed Test Plans/Test Cases lists sorting by Business Value column
  • Non-required dropdown custom fields which do not have a default value can now be saved with an empty value
  • The 'Last Run date' unit is back
  • Fixed 'Add & Open' button in quick add in the new inner lists and Relations
  • Improved performance of hierarchical test plan run creation
  • Fixed an error when creating a calculated custom field whose formula uses another custom field with the same name
  • Corrected the error message for an unsuccessful attempt to attach a file that's too large
  • Fixed reply comments that could not be deleted if their parent comment was deleted first
  • SMTP password length limit expanded
  • Fixed Time add from a list in the 'Work Hierarchy' tab
  • Fixed Tabular Report 'Assigned Effort' to show names properly (when there are multiple users assigned under the same role)
  • Fixed CSV export: checkbox custom field 'false' value used to export as 'null'
  • Fixed occasional failure to save Description changes which would sometimes occur even when only one user edited it
  • Fixed issue with renaming tags
Categories: Companies

Scrum Knowledge Sharing

SpiraPlan is an agile project management system designed specifically for methodologies such as Scrum, XP and Kanban.