While we have made great strides in challenging late integration, poor collaboration, and the obvious need for automation, we still have a ways to go when it comes to ideation, and to the dangerous amount of certainty we carry about products and people.

Assume Delivery is a Constant
I once had a physics teacher who often said, “Assume acceleration is a constant,” just before he took us into the land of big thinking. Before we stepped too far into the land of complex learning, he tried to reduce the number of variables so we could focus on the more complex aspects ahead. I use this same idea when working with product teams by helping them to work towards delivery as the constant.
Of course delivery is never a constant, but it is tangible and often deterministic. Teams that build and sustain adaptive ecosystems with well-structured code, high levels of automation, rich collaboration, and strong visualizations tend to do well with delivery, and often learn to deliver more than ever before. Thoughtful and aware teams (and programs) quickly realize that as product delivery becomes a constant, product discovery looms large on the horizon, and it is a land that's messy, clumsy, non-linear, and non-deterministic. Best practices rarely help.

Product Arrogance
So, for a minute, assume you’ve made delivery a constant. How sure are you that you are producing the right thing? How do you decide what is your next best investment, and how do you validate your choice? As a tool to help you, I offer the idea of product arrogance. Inspired by Nassim Taleb‘s use of Epistemic Arrogance in The Black Swan, or “the difference between what you know and what you think you know,” Product Arrogance is simply defined as “the difference between what people really need and what you think they need.”
Now, while you are still assuming delivery is a constant (which is no small challenge on its own), ask yourself, “How wrong are you?” as it relates to the product ideas you are chasing. Or examine the flip side: “How sure are you that you are building the right thing?” What makes you sure, and what makes you unsure, are areas of thinking and learning that confront teams when they’ve worked hard to smooth out delivery. It does not matter if they are using Scrum, Kanban, or NonBan.

The Myth of Certainty and the Measures of Realities
Many teams I coach talk about a “definition of done,” one of many emergent ideas from the agile community that has helped people learn to deliver. Work deemed done, in the form of working product in a meaningful environment, improves measures and learning, but sometimes induces a false sense of certainty and a dangerous level of confidence that success is near.
Unfortunately, products are only done when they are in use. Watching users in the wild often teaches teams that what they were certain about (“I am sure people will need to …”) is not what people actually need. It may be that one person’s arrogance, or fear of “not being a good product owner,” is the issue. It could also be the simple fact that the product ideas were right but the market changed the game. When this happens, shedding arrogance and embracing evidence is your best tool for building less of the wrong thing (which allows you to learn fast and spend less).

Embracing Wrongness
Product development, which goes far beyond product delivery alone, is an act of being wrong often. Like science, ideas are tools for learning and need to be viewed with less certainty than an automated test. Where people are involved, as opposed to code, automation is more difficult. People are beautifully chaotic and take unexpected journeys into interesting and uncharted territories. Being ready to be wrong is one way to be ready to learn, and product learning is something we all need to practice, and practice, and practice.
If you have practical experiences to share, please chime in so we can learn collectively from being wrong.
Guest post by Ellen Gottesdiener, Founder and President of EBG Consulting
If you’re on a team that’s transitioning to lean/agile, have you experienced troubling truths, baffling barriers, and veritable vexations around planning and analysis? We work with many lean/agile teams, and we’ve noted certain recurring planning and analysis pain points.
Mary Gorman and I shared our top observations in a recent webinar. Our hostess, Maureen McVey, IIBA’s Head of Learning and Development, prompted us to begin by sharing why we wrote the book Discover to Deliver: Agile Product Planning and Analysis and then explaining the essential practices you can learn by reading the book.
As we work with clients—product champions and delivery teams for both IT and commercial products—we strive to learn continually. And that learning is reflected in the book. It tells you how to take actions that will accelerate your delivery of valuable products and will increase your enjoyment in the work.
9 Pain Points to Prevent, Mitigate, or Resolve
Here’s what you need to know, in a nutshell — the 9 pain points we most often see in planning and analysis. (Note: when you read “team,” it means the product partnership: business, customer, and technology stakeholders.)
Inadequate Analysis: Teams start to deliver and then realize they don’t know what not to build. Some teams, making a pendulum swing to agile, abandon analysis, trying so hard to go lightweight that they go “no weight.”
Poor Planning: Teams waste a lot of time in planning and meeting without first having a shared understanding of the product vision and goals or the product needs for the next delivery cycle. Planning might be taking too long, or, on some teams, the product champion and delivery team mistakenly think they have sufficient information to plan and deliver.
Frazzled Product Champion: The product champion (what Scrum calls the Product Owner)—the person who makes decisions about what to deliver and when—is frayed, frustrated, overwhelmed, and overstressed. These people, the keepers of the vision and the holders of political responsibility for the value of the product, often struggle mightily to balance their strategic product-related responsibilities with their tactical ones.
Bulging Backlog: Teams accumulate monstrous backlogs (baselines) of requirements, often in the form of user stories. Every possible story or option for building the product weighs down the backlog, squeezing or obscuring the highest-value stories.
Role Silos: The team members are acting according to their formal roles, and not focused on the goal. For example, someone always writes the stories, someone else does the testing, and someone else develops. They don’t have a shared way to communicate or a shared understanding of the product needs.
Blocked Team: Teams. Just. Get. Stuck. Waiting. On hold. It even happens to teams using high-end agile project management tools, which are supposed to help them stay organized and efficient. Some of these teams are overwhelmed by the plethora of requirements (see “Bulging Backlog”). Or they have unclear decision rules or don’t know how to define, quickly analyze, and act on value-based decisions. We’ve also observed teams with too few “fresh,” well-defined requirements, ready to pull into delivery.
Erroneous Estimates: Estimates are way off (dare I remind you, most of us underestimate our work). We’ve observed teams that, even after three or four iterations, can’t stabilize their cycle time or speed. Often, they lack clarity about complex business rules and data details, or about the product’s quality attributes (such as usability or performance). That often contributes to our next observation.
Traveling Stories: Traveling stories (no, not traveling pants) are stories planned for a given iteration or release that end up being pushed to a later date. (As you may know, a story is a product need expressed as a user goal. Many agile teams use them, following the canonical format: “As a…I need to…so that…”) Occasionally stories travel due to unexpected technical issues. More often it’s because the stories are “too big” to be completed in a given release. Or at the last minute the team discovers they need an interface. Or they find unexpected business rules from an unexplored regulation. Or data dependencies pop up. Teams are not thin-slicing their stories based on value, and so they’re unable to finish.
Oops: Teams find unpleasant surprises during demonstrations and reviews, or weeks (or months) after delivery. Or worse, they aren’t delivering the right thing, the right value.
Context-Conversation-Collaboration: Pain Relief
You may have heard of card-conversation-confirmation, originated by Ron Jeffries and his coauthors. These “3Cs” explain the critical aspects of user stories, a part of the planning cycle.
Borrowing from Ron, we’ve found 3Cs of our own: agile product planning and analysis means attending to context, conversation, and collaboration. And these practices relieve the 9 pain points we’ve outlined.
Watch the Video
Hear more about our observations of development teams, learn about the underlying principles that we’ve seen work in all kinds of teams, and see how Mary and I integrated them into Discover to Deliver. The link to the video is here. Let us know what you think.
Troubling Truths, Baffling Barriers, and Veritable Vexations. What are your pain points around agile planning and analysis? Share them with us in the comments section below.
OTOH, I am told (and have experienced, and actually enjoy the fact) that "in Switzerland, we don't work on the weekends."
So here is an experiment and you can influence whether it succeeds: This June 14 and 15, I will hold a Weekend CSM course in Zurich, AFAIK the first. That's a Saturday and Sunday. We'll start earlier than usual, so you can finish in time to enjoy Saturday evening activities.
What do you think of the idea? I'd love to hear your comments!
Do you know someone who can't take a Scrum course during the week? Have them check out "Weekend CSM"
Assembla became known as the world's best Subversion host, but during the past four years we have added great git workflows, and we rely on them for all of our own development. This article answers some questions that we got recently about where to use Assembla git, instead of Github or Gerrit or Stash. Please contact us if you need help getting started.
WHY USE ASSEMBLA?
Assembla is designed to build the “team” in distributed team. Assembla gives you a team-oriented way to use git. You can put all of the branches and repositories in one workspace, with shared team permissions, one activity stream, and full visibility for new and existing team members.
Assembla is great if you create a lot of new projects, apps, and sites. The "space" container increases maintainability by putting everything for a project in one place - code, documentation, tickets, history, team, FTP or SSH deploy, and extra tabs that can contain build and monitoring tools. You get started quickly, you can always come back for enhancements, you never lose anything, and you can hand off to new team members and contractors, or make an extremely complete and professional delivery to clients.
FEATURES and WORKFLOWS
Assembla links code commits and merge requests to a ticket (or issue). You can see a complete history of the changes that affect a ticket.
You can ALSO see a list of the tickets that were affected by a merge request or a list of merges. This gives you automated release notes for a continuous delivery process.
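The underlying idea is simple: ticket references embedded in commit messages become a two-way index between code and issues. A minimal sketch of that indexing, not Assembla's actual implementation (the `#NNN` reference syntax is an assumption for illustration), that builds release notes by grouping commits under the tickets they mention:

```python
import re
from collections import defaultdict

# Hypothetical convention: commits reference tickets as "#123" in the message.
TICKET_RE = re.compile(r"#(\d+)")

def release_notes(commits):
    """Group commits by the ticket numbers referenced in their messages.

    `commits` is a list of (sha, message) tuples; returns a dict
    mapping ticket id -> list of shas that touched that ticket.
    """
    notes = defaultdict(list)
    for sha, message in commits:
        for ticket in TICKET_RE.findall(message):
            notes[int(ticket)].append(sha)
    return dict(notes)

commits = [
    ("a1b2c3", "Fix login timeout, re #42"),
    ("d4e5f6", "Refactor session store, re #42 and #57"),
]
print(release_notes(commits))
# → {42: ['a1b2c3', 'd4e5f6'], 57: ['d4e5f6']}
```

Running the same index in the other direction (sha → tickets) is what gives you per-merge release notes.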
Assembla implements the complete Github-style individual coding workflow. Developers can make an individual branch or repository. They can submit pull requests (called merge requests). They can review, comment, accept, or reject merges. They can use @mention and @mention! to call for reviews or post comments for specific team members.
Assembla also implements a Gerrit-style workflow which is often better for big teams or full-time teams. In the Gerrit-style workflow, team members start from a shared master branch, and they make temporary branches to review specific changes. This unifies the team and removes the work that team members have to do to maintain their own repositories. Assembla has automated this workflow to simplify the Gerrit or “branch per ticket” flow. The master branch can be marked as “protected”. When team members push changes to the master branch, Assembla automatically creates a temporary branch to test and review the change.
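The "branch per ticket" flow described above boils down to a routing rule: a push aimed at a protected branch is diverted to a temporary review branch. A toy sketch of that rule, with an assumed branch-name format and ticket syntax (not Assembla's actual conventions):

```python
import re

# Assumed conventions for illustration only.
PROTECTED = {"master"}
TICKET_RE = re.compile(r"re #(\d+)")

def review_branch(target_branch, commit_message):
    """Return the branch a push should actually land on.

    Pushes to a protected branch are diverted to a temporary review
    branch named after the referenced ticket; pushes to any other
    branch pass through unchanged.
    """
    if target_branch not in PROTECTED:
        return target_branch
    match = TICKET_RE.search(commit_message)
    ticket = match.group(1) if match else "untracked"
    return f"review/ticket-{ticket}"

print(review_branch("master", "Add retry logic, re #108"))  # → review/ticket-108
print(review_branch("feature/x", "WIP"))                    # → feature/x
```

In a real system this rule would live in a server-side receive hook; the point is that the team keeps one shared master while reviews happen on short-lived branches.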
Assembla has a full API and Jenkins integration so that you can run continuous integration on review branches, merge requests, and master branches. Jenkins can even be added as an Assembla tab.
Okay… let’s set a little context here. In my last post we talked about two different types of projects. The ones that are knowable and the ones that aren’t knowable. Projects where it makes sense to estimate and projects that are more like R&D investments where we are spending money to learn and discover. Today, I want to talk more about the first kind. The ones where we do have some idea of what we are building and the technical challenges that might be involved.
Getting clarity on what we are going to build and how we are going to build it isn’t easy. This is especially true when we have multiple competing stakeholders, no clear way to resolve priority conflicts, more to do than we can possibly get done, and technology that, while understandable, certainly isn’t trivial. In the face of this kind of ambiguity, I think that many of us have thrown up our hands and concluded that software isn’t estimateable and everything should be treated like R&D.
I think this is a mistake. If we keep pushing to treat everything like R&D, without understanding the delivery context we’re working within, the whole agile movement risks losing credibility with our executives. If you remember from our earlier conversations, most companies have predictive-convergent business models. We may want them to be adaptive-emergent, but they aren’t there yet. We can talk later about how to move these folks, but for now we have to figure out how to commit.
Back in the day when I was just learning about agile, and immersed in the early thinking of Kent Beck, Alistair Cockburn, Mary Poppendieck, and Ken Schwaber… I came across this idea that the team had to commit to a goal or an increment of the product at the end of the iteration. There were some preconditions of course, the story had to be clear and understandable, we needed to have access to an onsite customer, and the team had to have everything it needed to deliver.
If the story wasn’t clear and understandable, there was this idea of a spike. I’ve always understood a spike to be some snippet of product we’d build to go learn something about what we wanted to do or how we were going to do it. This idea has expanded some over the past few years to include any work we bring into the sprint to do discovery or learn something we didn’t know before. It’s basically an investment the product owner makes to get clarity into the backlog.
Like most of you guys I’m sure, my planning approach was heavily influenced by Mike Cohn’s Agile Estimating and Planning book. Mike turned me on to Bill Wake’s INVEST model and I’ve used that as a tool for understanding and teaching good user stories ever since. The longer I use the INVEST model the more profound I think it is, but as widely accepted as this idea seems to be, I think there are a few under-appreciated letters in the model. The one most relevant to this discussion is the ‘E’.

E is for Estimateable
The idea behind INVEST is that these six attributes of a user story set the preconditions the Product Owner must meet before the story can be brought to the team. If they are not INVEST ready, they get deferred and we schedule a spike. The idea behind the ‘E’ specifically is that the user story must be well enough understood by the team that they know how to build it. The team has to be able to break down the user story enough to put detailed estimates on the tasks and be willing to commit.
Remember when I said that almost every dysfunction on an agile team tracks back to the backlog? Well, here is the problem… most teams are accepting user stories into sprints that are not INVEST ready and are certainly not estimateable. If the team doesn’t have enough information to make a commitment, they shouldn’t make a commitment. So… do we conclude from this that software isn’t estimateable, or do we conclude that we collectively do a poor job of backlog grooming?

Backlog Grooming
Because most of us work in dysfunctional organizations that don’t manage a roadmap, don’t stick to their vision, thrash around at the executive level, and generally can’t make up their mind… many of us have come to the conclusion that creating a backlog is waste. Why spend all that time writing up user stories when things are going to constantly change? This is indeed a problem, but giving up on planning is not the answer. Making up your backlog as you go isn’t the answer either.
When we make up the backlog as we go, everything the team encounters is going to be new to them. Everything they do is going to require a spike. The whole notion of Sprint Planning and Release Planning is predicated on the notion that we generally know the size of the backlog, we learn the velocity of the team, and based on those two variables we can begin to predict when we’ll be able to get done. If everything requires a spike, the backlog is unstable and indeterminate.
For every user story we attempt to build, we create at least one spike, and probably two to three more user stories. Even if we have a stable velocity, the scope of the release is increasing faster than we can burn down the backlog. We never end up with a stable view of what’s happening on the project. The team feels like it’s thrashing, the product owner feels like they’re thrashing, and the organization gets frustrated because the product isn’t getting out the door.

Dealing with Backlog Uncertainty
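The arithmetic behind this is easy to check. A toy model, with made-up numbers, comparing a groomed backlog burned down at a stable velocity against one where every completed story spawns newly discovered work:

```python
def sprints_to_empty(backlog, velocity, spawn=0, max_sprints=100):
    """Burn down `backlog` stories at `velocity` stories per sprint.

    Each completed story adds `spawn` newly discovered stories
    (spikes, split stories, surprise dependencies). Returns the
    number of sprints until the backlog empties, or None if it
    never does within `max_sprints`.
    """
    for sprint in range(1, max_sprints + 1):
        done = min(velocity, backlog)
        backlog = backlog - done + done * spawn
        if backlog == 0:
            return sprint
    return None

print(sprints_to_empty(40, 5))           # groomed backlog → 8
print(sprints_to_empty(40, 5, spawn=2))  # every story spawns 2 more → None
```

With a groomed backlog the release date is simple division; once each story spawns even one new story on average, velocity stops predicting anything, which is exactly the "unstable and indeterminate" backlog described above.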
To solve this problem… to get better at making and meeting commitments… even at just the sprint level… we have to plan a sprint or two ahead of the increment we intend to commit. That means if we want to get better at nailing sprint commitments, we need to have the backlog properly groomed BEFORE the sprint planning meeting. If there is spike work to be done, that needs to be identified and done BEFORE the sprint where the user story is going to be built.
The implication here is that the team (in any given sprint) is going to have some capacity allocated toward the work of the sprint and some capacity reserved for preparing for the upcoming sprints. This could be as simple as having a meeting or two every sprint to help the PO groom the backlog, do look ahead on the backlog, to ask questions and to give guidance. Sometimes it’s more ad-hoc and sometimes it’s more formal. Some teams track this work, some just allow it to lower velocity.
Either way, allowing some slice of capacity for preparing for the upcoming sprints is an essential element to begin stabilizing delivery, agree? If you are with me so far… I’d suggest that stabilizing a release follows the same rules we just applied for stabilizing sprints. Don’t commit to anything in a release that the team doesn’t understand or know how to build. If there is a ton of risk and uncertainty coming out of release planning, Scrum isn’t necessarily going to help us deliver reliably against the release objectives.
So how do we get this level of clarity on the release-level backlog? If we apply the same concept at the release level that we applied at the sprint level, we’d have to suggest that the team do spike work BEFORE they get to the release planning event. That means we need to have some idea of what’s in the upcoming release, and the unknowns associated with that release, BEFORE the current release is even done. If we are doing 3-month releases, I’m looking for high-level planning somewhere between 3 to 6 months out.

That’s NOT Agile!?
And here is what I’ll inevitably get when I make this point with some folks. Hey Mike… that’s not agile!? That sounds like Waterfall. You’re suggesting that I have to plan 3 to 6 months out, at the detailed user story level, in order to stabilize delivery and make and meet commitments at the release level? Just to be clear, that is EXACTLY what I am saying. And that begs the question… what is agile? What exactly does it mean to be agile at the corporate level?
In the context of a predictive-convergent company, one that is doing projects that are not R&D, where it is reasonable to understand requirements, and the technology is not totally unknowable… yes, it is perfectly reasonable and advisable to start looking at your backlog 3 to 6 months in advance. Maybe not at the finest level of granularity, but we need enough understanding of what we are going to build, and how we are going to build it, to sufficiently groom the backlog and mitigate risk.
Corporate agility comes not from the practices of Scrum, or from making your backlog up as you go, but from creating the ability to change your mind as you learn new information. If I plan ahead, sure I may create some waste and incur some carrying cost due to a longer backlog, but because I am building complete features in short time-boxes, and working toward potentially shippable code every two weeks, I make it easier to change direction. Some level of forward planning is the cost of making and meeting commitments.
I’d go so far as to say that in a larger organization, changing your mind every two weeks is the equivalent of thrashing.
Most companies need to be able to change their mind every quarter, some only every six months. Most do not need the ability to change their mind every day. In this case, if we can stabilize the roadmap, get some basic governance, apply some Lean/Kanban based program and portfolio management, and disciplined release management… you can get a stable well groomed risk adjusted backlog. And it is possible to estimate and make and meet commitments.
We do it with our clients all the time.

Building My House With Agile?
And this is why I told you the story about building my house. I think the metaphor holds well in companies that are trying to deliver in the predictive-convergent problem space.
1. We routinely recommend a 12-18 month roadmap level plan. Initiatives are broken into 2-3 month increments that can ideally be delivered within a single release. The roadmap isn’t just about business goals, it also has architectural guidance and maybe even high level UX. It’s enough to understand what we know and what we don’t know and provide budgetary estimates that should be close enough to reality such that they establish reasonable constraints. This is analogous to the time I spent with the builder coming up with the architectural design, feature list, and budgetary estimates for my home.
2. We routinely recommend a 3-6 month feature level plan. This helps us get clarity around what we can actually build in the current release and what’s coming in the next release so we can start grooming the backlog and mitigating risk. For me, there is no hard rule for a feature level breakdown. Of course I’d like the features to be smaller than the roadmap level initiatives, but they are going to be much bigger than user stories. I like to allow them to span sprints, so somewhere around 2-4 weeks seems right to me. Having this view is analogous to a construction schedule that lays out the foundations, walls, roof, landscaping, etc., and puts the key dates on a calendar. We still haven’t made all the fine grained decisions, but the project is starting to come into focus.
3. We routinely recommend a 3 month rolling backlog of fine grained user stories. I’m not saying that the user stories have to be 100% sprint ready, but they should be risk mitigated and pretty darn small. Definitely smaller than a sprint and confined to a single team. They should have a clear definition of done and some acceptance criteria, but as we learn, we may split them up more, hopefully finding ways to leave stuff out and focus on just the minimally marketable part of the requirement. This is analogous in my house building example to picking carpet color, paint color, the exact kind of hardwood and tile, and the placement of the bushes in the front yard.

Context, Context, Context
If we are building stuff that is unknowable… sure let’s not plan or estimate or commit to anything. But let’s also be clear with our stakeholders what they are investing in. They are putting significant dollars at risk hoping to find a solution to a problem that may or may not exist. That is a perfectly valid way to spend money and allocate investment as long as we are all in agreement that is what we are doing. If the stakeholders think they’ve got a guarantee, that is a problem.
If we are building stuff that is knowable… we need to have a plan for road-mapping the product, progressively breaking things down into smaller and smaller pieces, mitigating risk, planning forward, establishing velocity, collaborating to converge on desired outcomes, making tradeoffs, communicating progress, and reporting status. It’s not that things will never change; maybe our high level planning is off, or maybe we see a risk we didn’t anticipate, but at least we have a baseline to communicate how what we learned may impact the project.
If we are building stuff that is knowable… but we don’t have the organizational structure, governance, metrics, discipline, prioritization, or whatever… and it just FEELS like the work is unknowable… putting these projects in the unknowable category diminishes our credibility. Executives know this stuff is knowable. You’d be better off calling out the stuff that is broken in the organization and working with the organization to fix it. Scrum calls these organizational impediments. Remove them.

Conclusion
In my opinion, regardless of whether you are a consultant or an internal employee, people appreciate a thoughtful consideration of their particular context and a willingness to adapt your point of view to help them solve the problems they are really trying to solve. We started this thread with the notion that many in the agile community are solving the wrong problem. Even if it’s the right problem, it’s not the one that executives are trying to solve.
There is so much goodness coming out of the agile community right now. So much forward thinking and so many new ideas. Unfortunately we also see a ton of dogmatism and a profound lack of understanding about the real problems executives are trying to solve. I think we have had, and continue to have, the opportunity as a community to make a profound impact on the way companies operate and in the lives of the people working in them.
The early adopter ship has long since sailed. We are probably on the tail end of the early majority. That leaves us with the late majority and laggards just coming to the party. Maybe that’s what’s driving some of my consternation here. Maybe it’s time we start figuring out how to talk to the folks coming late to the party. Maybe it’s time to focus on how to help these folks adopt agile. For these folks it’s less about defining the end state and more about learning how to get there.
NOTE: This series has been an interesting exploration for me. We have been evolving our transformation model over the past year or so, and writing about this stuff has given me a level of clarity and understanding I didn’t have before. I’m actually talking about stuff differently than I was even a few weeks ago. I made some connections that I hadn’t made previously.
This post ended up being a good setup for going back and reevaluating some of my first few posts and putting them in a slightly different context. I think what I’ll do is recap some of the earlier stuff in the new context and then build on the house stuff, planning and risk stuff, to start talking about governance and road mapping. The next few weeks are going to be a little crazy, so wish me luck actually finding time to write. Thanks for reading.
Check out the previous post: Understanding Risk in Your Project Portfolio
I continue exploring the deep reasons behind various phenomena in agile software development. It’s amazing how often the universal principle of entropy manifests itself in the relatively short ~10-year mainstream history of this movement. It looks like I have another eye-opener.

Entropy in software development
What is entropy, in simple terms? Here, it is the principle of bouncebacks, as in a pendulum. If there’s too much emphasis on one side, a social or ideological pendulum bounces to the opposite point, as if to compensate for the overdoing. We have too many decisions to make and we’re tired of that? Let’s invent the panacea, called Big Data. We got tired of deadlines in waterfall software development, and even Scrum didn’t help, because we got tired of those time-boxed iterations? All hail Kanban now, because it has no deadlines. We are tired of managers treating us as numb tools? We want to break free, so we delve into the tempting freedom of self-organization and no managers at all. I sketched the visual for this pendulum, to make the metaphor clearer.
Now, the universal law of bouncebacks is not a problem per se. It’s a given. We can’t get away from seasons changing; and they say that if you laugh too much, you will soon cry. Anything in this world has a bounceback, if overdone. What do all those bouncebacks have to do with the problems of my company, you might ask? That’s what I’m getting at.

There’s a composite of forces that start a trend or a movement. If a critical mass of companies is having similar problems, then the pendulum gains powerful momentum toward the opposite side. That’s how it happened with agile. Some companies hopped on this vessel as early adopters, some have not yet accumulated enough energy for that move, some are in the process of swooshing with the pendulum swing. Each and every company has its own unique place in this swing, its unique combination of energies and driving forces. The way I see it, the art of technical leadership has nothing to do with catching the swing and rushing headlong where everyone else is rushing. To follow the “join everyone else” call is the easiest thing to do, and a certain component of luck might help here as well, if a company has caught the early tide and benefited from the swoosh. The hardest thing is to sense where your organization is now, in which particular point of the swing, and how standing there influences your org’s productivity and ability to deliver software to your customers.
I’ve spent enough years in software development, with customers, developers and managers, and I speak from personal experience. It feels good swinging along with the pendulum as it bounces from the left to the right, as in the picture. I felt that the agile movement and self-organization in a team were what I really needed at some point in time. However, later on I started connecting the dots in the bigger picture and saw some signs that self-organization and several other values declared in the agile manifesto are not universal laws. They rather signify the points of bounceback. There’s no universal proof that self-organization, or face-to-face communication, yields the highest productivity for any company. Keep in mind that the agile manifesto signatories are software craftsmen. They’ve postulated the agile principles to declare that software development is a craft, and they are a noble guild that wants to serve customers. By declaring those principles, they mentioned nothing about constraints, times-to-market, expectations of stakeholders, and financial goals.

Is it a human instinct to deliver projects on time?
Some complexity theory studies compare the laws by which organizations function with those of colonies of ants or flocks of birds (e.g. check this article dated 2009). According to those theories, ants and birds know instinctively how to self-organize to fetch food, or to fly safely to their destination, and the researchers somehow concluded that IT companies must follow the same pattern. I shared this belief until recently, when a realization struck me. I beg your pardon, but is it our natural human instinct to deliver a release on time? Or to meet a harsh deadline? Can you imagine the Great Wall of China built by a self-organized team? Of course not. Why am I asking this? Well, software developers are not ants and birds, and, speaking of instincts, their basic high-profile instinct as software craftsmen is to craft what they do to the point of perfection. If given endless time, they would craft it at their own pace, be it a UX design or a piece of code, and that’s where they indeed are able to self-organize and run a flat organization as, for example, Valve or 37 Signals do. Lucky is a software craftsman, and lucky is a team, that can afford the luxury to release at their own pace, with this deep feeling inside that they have crafted this piece to perfection. However, we live in a world that is full of constraints. A pizza-sized team of craftsmen wants to grow big, because they now want to come up with yet another awesome piece of software. This move requires more hands, and coordination with stakeholders, internal and external. The noble castle of software craftsmanship is invaded by merchants, who want deadlines met and sales goals hit.
So, when the harsh reality of the market hits the noble craftsmen, they follow inertia and think that self-organization (or a flat structure) will let them achieve those very non-instinctive deadlines and sales goals. That looks like what happened to the folks at Everpix, who did not have the practical sense to grasp that they had to take the merchants into account, and kept their heads in the clouds. Had they been smart enough to sense in due time that the pendulum wanted them to swing backwards, down to earth, to pragmatism and discipline, Everpix might still be alive, delivering a nice service to people. That didn’t happen, however.
It’s time now to take a pragmatic look at the eulogies sung to flat organizations and self-organized teams. The opposition of hierarchy and flat goes first in the list of pendulum bouncebacks (see the sketch above). As for the other three pairs, I’ve covered them elsewhere. The flat company structure is related to the concept of laissez-faire leadership. While this is an excellent approach to leading a company from the standpoint of individual learning and the skills acquisition of team members, it does have some faults if what a company needs most of all at this very moment is maximum productivity. Quoting the article I linked to above: “Researchers have found that this is generally the leadership style that leads to the lowest productivity among group members.” Again, this leadership style is a bounceback from rigid hierarchy. It does allow people to make decisions and contribute to activities beyond their direct responsibilities, but people have only so much energy, and how it is spent depends strongly on individual working styles. Someone who has a lot to contribute to good decisions may not cope well with the combined need to make decisions and to produce deliverables. Organizing and decision-making are work: a chore that drains energy from such an individual’s performance. If a UX designer or a senior developer who is at the same time responsible for implementing a feature or a design spends too much time in decision-making activities unrelated to their main work, that is a serious hindrance to productivity. Oh, and don’t forget that personal discipline and responsibility are a must for that style, as well as high cross-competence in all aspects of software development. What happens, then, if team members lack competence, and what they do is learn? The momentum spent on learning is subtracted from production. Here’s a visual to help explain the idea quickly:
The colored background shapes show how each of the components would take from the other two, if overdone.

Balance learning, time and productivity?
There is a way, though, to get the best of productivity while keeping the learning in place, but it does require smart thinking behind the daily setup of company operations. Back to the pendulum: the tech leader has to keep the company somewhere around the pivot point, with no rush to extremes. Someone has yet to develop the balanced mix of agile and hierarchical management. This mode of pivotal operation implies razor-sharp intensity and focus in anything and everything, the foresight to know and define which issues need whose attention, and when, and setting boundaries, even to the point of worshiping individual productive flows, those of decision-makers and those of performers. It takes hard thinking, and making hard choices: can we afford five meetings with seven people to discuss this issue? Is it worth it? Does it really matter for productivity that everyone reconciles and holds the same viewpoint on any given problem? How can people’s skills and competence be applied optimally, to keep them producing while they learn new things? As I read stories about companies that seem to be successful with a flat structure, I note their discipline and foresight with work-related interactions. So, if tech leaders want to copy-paste the laissez-faire style because everyone says it’s cool, they need to be realistic in assessing where their company is on the pendulum’s arc, and which priorities prevail at this very moment. Hands-off leadership, which is the same as self-organization, is a luxury that not everyone can afford.
I hope to share more thoughts on the modus operandi of tech leadership in the future.
The Periodic Table of Visualization Methods (an aside: it puzzles me why people abuse the notion of the periodic table of elements; it's not just a table with iconic status, it's a predictive model in action... meanwhile, back to the visualization methods...) is an awesome chart with wonderful pop-ups that explain all the visualization methods.
Periodic Table of Visualization Methods
So let's just go right ahead and include the predictive model of Mendeleev's table.
Example of the Periodic Table style applied to Scrum by my friend and colleague KaTe.
by Kate Terlecka
When you're into visual info, books on work processes, wonder, and humor... you need a crateful of grateful. Bet ya can get it at www.getlit.me. Here's a sample of Todd's book review infographics.
Succeeding with Agile Governance
Want to visualize what that conference call really looks like? Watch this video.
from Tripp and Tyler
Visualize the 5 SOLID class design principles.
Visualize what an org. chart should look like (if it were to be a useful tool, rather than an ego booster):
A while back I was invited to the AITP Atlanta Chapter for a CIO roundtable discussion that involved questions on agile. The event was a great success, and I came away with a bunch of great insights into the topics on the minds of today’s CIOs. One statement that night has really stuck with me. A wise, retired CIO told me, “Don’t sell me your solution, solve my problem.”
That statement further solidified my belief that I am not “implementing agile” (hang with me), but rather I am solving a problem or a set of problems that commonly occur in enterprise environments.
We don’t sell vials of snake oil. Here’s why that may be the perception.
Let’s consider the state of affairs for a moment. When I get the opportunity to have a discussion with a new organization, the person I am talking to needs me to solve a problem. They might not know anything about agile, scrum, kanban, or any other process. Alternatively, they may have experienced a poor implementation and have an immediate bias against any of the “agile” words (e.g., sprints, daily scrums).
If I am selling them something, I genuinely want to solve the problem, not implement agile. That allows me to be a pragmatic partner with knowledge of agile systems that can benefit my customer. It breaks down zealotry and keeps me honest.
In the end, I am directly and intentionally not talking about agile, scrum, kanban, lean, or anything else under the agile umbrella. I simply want to know what the most impactful issue is that their enterprise is facing right now, in their unique context. Can I solve it? No, but we (myself and the enterprise) can. It must be a partnership, though, to be an effective, sustained transformation.
Here’s the catch. Most, but not all, issues will fall into one of several categories.
- Time To ROI
Though categorically these problems are recurring across many business enterprises, the underlying causes can be a complex, interwoven gobbledygook of methods, procedures, and people.
That’s why it’s important to listen, and to offer up expertise once you understand the problems. I’m aiming for an engaged dialog that creates a shared understanding of their problem, so we can work together to create a vision of our solution. I’m not there just to lend an ear. That’s why I need all this experience stuff.
Speaking of experience, I have found commonality among the solutions for each of the categories listed above.
Let’s take Predictability. In order to become predictable, most organizations will need to learn how to systematically decrease batch size, reduce WIP at the enterprise and team levels, and stabilize teams. Inherently, this will increase quality and decrease Time To ROI. To further improve those, I will need to run experiments on their unique context.
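One reason reducing WIP helps predictability can be sketched with Little's Law, a standard queueing-theory result the original post does not mention explicitly; all numbers below are invented for illustration:

```python
# Little's Law: average cycle time = average WIP / average throughput.
# Halving work-in-progress at constant throughput halves cycle time,
# which is one concrete mechanism behind "reduce WIP to get predictable".

def avg_cycle_time(wip: float, throughput: float) -> float:
    """Expected time an item spends in the system, in the same
    time unit as the throughput (here: weeks)."""
    return wip / throughput

# A team finishing 5 stories per week with 20 stories in flight:
before = avg_cycle_time(wip=20, throughput=5)  # 4.0 weeks per story
# The same team with WIP capped at 10:
after = avg_cycle_time(wip=10, throughput=5)   # 2.0 weeks per story
print(before, after)
```

Little's Law holds for long-run averages of a stable system, so this is a planning heuristic, not a guarantee for any single story.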
How do they begin to get predictable? That’s what they need help doing. It’s what scaling agility is really all about. Getting more value out of the system to decrease time to ROI and predictably make and meet corporate goals. Informed, predictable returns on investment.
At my core, I believe a predictable system is one that we can run experiments on and get most other problems solved. That’s my key to unlocking the shared potential of both parties.
Fail Better - Dublin Science Gallery
"The goal of FAIL BETTER is to open up a public conversation about failure, particularly the instructive role of failure, as it relates to very different areas of human endeavour. Rather than simply celebrating failure, which can come at great human, environmental and economic cost, we want to open up a debate on the role of failure in stimulating creativity: in learning, in science, engineering and design."
So scientific processes have a little trick up their sleeves called the Null Hypothesis. The null hypothesis, or default answer, is generally assumed true until evidence indicates otherwise. How often do you use this process to mutate your software development process? How do you protect yourself from the confirmation bias during your process improvement experiments? Do you see this null hypothesis at work in the TDD process of proving a unit test fails before the implementation code creates evidence to indicate otherwise?
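To make the TDD parallel concrete, here is a minimal sketch of the red-then-green cycle; the `slugify` function and its behavior are invented for illustration:

```python
# Null-hypothesis thinking in TDD: the default assumption is that the
# feature does not work. We write the test first, watch it fail against
# a stub (the "null" result), and only then implement until the test
# provides evidence against that default assumption.

import unittest

def slugify(title: str) -> str:
    # Step 2: written only after watching the test below fail
    # against an earlier stub that returned an empty string.
    return title.strip().lower().replace(" ", "-")

class SlugifyTest(unittest.TestCase):
    # Step 1: this test existed before the implementation above.
    def test_spaces_become_hyphens(self):
        self.assertEqual(slugify("Fail Better"), "fail-better")

# Run the test programmatically (avoids unittest.main's sys.exit):
suite = unittest.defaultTestLoader.loadTestsFromTestCase(SlugifyTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The point is the ordering: the failing run is the evidence-free baseline, and the passing run is the evidence that rejects it.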
Since this scientific process is not very natural for us humans, it leaves me wondering how we learned it. One common answer is Bacon! Francis Bacon, to be precise; Voltaire called Bacon the father of the experimental method.
Back when I was a Director of many things at one company, we had an urgent patch to go to a customer. My VP wanted it “yesterday.” Well, time only goes in one direction.
I gathered my continuing engineering team, explained the pickle we were in. “Everyone wants this patch right away. However the customer is truly pissed. I want to know that we have a fix that works. And, while you are working on it, I will need to know updates every morning and every afternoon. I will run interference for you, as well as I can.”
Everyone groaned. They knew what this meant. We had a small company. The corporate management was just down the hall from our offices. Even though I said I would run interference, nothing would prevent the VP of Engineering, the CEO, or the CTO from popping their heads in “to see what’s going on.” Everyone wanted to make the customer happy, right now.
At the time, I didn’t know about kanban boards. I knew about spreadsheets and email. I had four people working on this fix. I knew what they were all doing. So did they.
They managed themselves. Their offices were close to each other. Every day, about noon or so, they gathered in my office, so I would have the most up-to-date status. It wasn’t quite a standup, because some of the work was what we would now call spikes. (At first, we had no idea what was causing the problem.)
As we identified the problem, I explained to management on behalf of the team how they narrowed down the problem and identified it. Then I explained to management on behalf of the team how they were debugging the problem. Then I explained to management on behalf of the team how they were testing the fixes they proposed. Then I explained to management how they were packaging the fix they had decided on.
If we’d had a visual board, this might have been easier. I used email. It took close to a month. It was a very difficult fix.
Notice what I did:
- I explained to the team the results I wanted: as quickly as possible, but it had to be right. Right trumped shoddy.
- I explained that I needed information, and how often I needed it.
- I ran interference and kept the rest of the management team informed, daily. My goal was no surprises.
- I explained things on behalf of the team, so they got the credit. I was doing my management job, not technical work.
Because our management and I could share the interim results with the customer, the customer was not happy during this month, but they were pleased to know we were working on the fix. By the time they got the patch, they were very pleased. It worked.
I did not micromanage my people. I understood their state. There is a big difference. And that is the topic of this month’s management myth, Management Myth 26: It’s Fine to Micromanage.
If I had stood over their shoulders, and asked, “Is it done yet?” I suspect I would have had different results.
My team understood that I was doing my management job. I didn’t prevent all other senior management interference. But, I prevented most of it. In return, they were free to work together to accomplish their goal: a fix that didn’t upset the rest of the system and really fixed this customer’s problem.
It’s easy to fall into micromanagement. We, as technical people are terrific problem solvers. We excel at it. We want to help other people solve their problems. Micromanagement is inflicting help on other people. It’s not helpful. It’s irritating and prevents other people from doing their jobs.
Have you caught yourself micromanaging? If so, what made you stop?
Finally, I’ve started reading Deming
It took me too long to discover that I deliver more value not by coding faster, but by focusing on quality. As a young programmer being assigned tasks as an individual, I felt pressure to code fast. Just getting something working was an achievement. Untested, the work kept coming back with defects that I would patch up without taking time to get to the root of the problem. A team of testers tested and coders coded; neither improved quality. I was soon spending much more time fixing code than writing new code. It’s hard to imagine people choosing to work this way, but it’s a system that is replicated all over the world.
As I grew more experienced and started writing tests that shaped my code, the defects dropped, but we still spent much of our time producing functionality that wasn’t being used. Without taking time to get feedback and discover what was really needed this waste took up much of our time. Everything we created that didn’t meet a need was effectively a defect that needed to be maintained or removed. The cost of doing the wrong thing is even greater than not doing things right.
It took me so long to discover how insane this was because I was absorbed in a business culture of cutting corners in an attempt to deliver whatever’s being asked for, rather than thinking more deeply about what’s really needed. Alone, I never really understood the damage this culture was doing; Deming could have helped.
When we came together, started working as a team, and took time to reflect, we quickly agreed on the need to improve quality. We started listening to those in the development community telling us how to do this. We spent more time refactoring, removing redundancy, and reducing the liability of our technical debt. Working collaboratively, a culture of quality emerged: we supported each other, giving each other permission and encouragement to write better code. The team began to care more about the needs of our customer.
Pressure is the enemy of quality. Under pressure we stop thinking and just react. Reactions don’t have the foresight to consider quality. Managers pressuring teams to go faster, instead of providing a clear vision, facilitating learning, and encouraging a culture of continuous improvement, are often the problem. Chase velocity and it goes down. Chase quality and increased velocity emerges. Quality pays.
A post or so ago, I used the process of designing and building a house as a metaphor for how to plan an agile project. I was basically making the case that we would never spend our own money the way we are asking our stakeholders to spend their money. We would never give a home builder $500K without some sort of commitment around what we were going to get for our time and money. As consumers, we want to know what to expect.
That said, I don’t think it’s reasonable for us as consumers to expect that nothing will ever change as we begin building and learn more about the kind of house we really want to build. We can’t go back to the builder and demand changes for free because we didn’t understand exactly what we wanted. We create high-level plans, set budgetary estimates, and collaborate throughout the process to guide the build to successful completion and a satisfactory outcome.

Why This is Relevant to Software
Software projects are all over the place in terms of risk and uncertainty. If I am building an e-commerce site from scratch, and we are using known and well understood technologies, that is a pretty straightforward type of project. I might not know everything that I want on day one, but I can understand the major parts and pieces. I can understand the underlying implementation, create a budget, and probably come pretty close to meeting that budget at the end of the project.
This is the type of project where the metaphor of building the house makes sense. Create a high level plan, develop budgetary estimates, establish constraints, and work collaboratively with the customer to guide the solution into a successful outcome. As the consumer, I should be able to show up with my money, spend a minimal amount of time up front, and get started on the project. I should have a pretty good idea of what to expect when the project is done.
Not every project, though, follows this same kind of rule. Some projects are much more uncertain. Some projects are funded when neither we nor the customer knows much of anything about the target product. They might know they have a need, but understanding how to solve for that need is elusive. Some projects are being built for customers that don’t even exist yet. We are creating a market and trying to figure out what we are going to sell as we go.
Some projects are built on top of legacy systems that have a ton of technical debt, poor automated testing, no ability to produce a daily build… let alone do continuous integration. Some products have so many defects and undocumented edge cases that interacting with or changing that software is an exercise in futility. How do you estimate for a project where requirements are unknown and the technology platform is virtually unknowable?

Estimating the Un-estimateable
I do believe there are projects out there which defy estimation. Projects where the requirements are truly unknown and maybe even unknowable. Projects where the risk and technical uncertainty are just too great to have any idea what it is going to take to do the project. When you couple this with the fact that people are not fungible resources and don’t necessarily burn estimates at the same rate, trying to predict anything doesn’t seem to make much sense.
It seems to me that we should look at these unknowable kinds of projects differently and apply a different approach to managing them. For the first kind, it seems reasonable to create a high-level plan, look into the future to identify and mitigate risk, provide estimates and plans, and have a pretty good idea of what we are going to get. We can still do agile; we can still inspect and adapt; we can still change requirements in the small while converging on our higher-level goals in the large.
The second kind of project can’t be handled the same way. We don’t know what the requirements are or what it will actually take to build them. We can’t do high-level planning or look into the future to identify and mitigate risk; estimates are nonsense, planning is an exercise in futility, and (if we are honest with ourselves) we have to admit that we really don’t have any idea of what the customer is going to get for their time and money. It’s a bit of a crapshoot.
If we can fundamentally agree that both kinds of projects exist, maybe we need a different way to talk about each of them. Maybe for the ones we can predict, we use adaptive planning techniques… user stories, timeboxes, estimates, roadmaps, rolling wave planning, or whatever… to make sure we are driving up communication and collaboration, delivering early, getting feedback, and tailoring as we go. For the ones we can’t… I think we need to start using the language of investments.

Why Do We Call It a Project Portfolio?
If we use the notion of an investment portfolio as a metaphor to talk about a project portfolio… we can start to think about our collection of projects as a series of investments. In a balanced portfolio, I might have some of my money in lower yield, relatively safe, municipal bonds. I might put some of my money in a higher risk mutual fund and hope for a bigger return. I could allocate the remainder to even higher risk, higher reward mutual funds, individual stocks, or maybe even a startup.
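A toy expected-value calculation makes the tradeoff between those buckets concrete; every probability and payoff below is invented for illustration:

```python
# Expected value of a one-shot investment: with probability p_success we
# earn the payoff, otherwise we lose the stake. Riskier bets need a much
# larger payoff to be worth holding at all.

def expected_return(p_success: float, payoff: float, stake: float) -> float:
    """Expected profit: win `payoff` with p_success, lose `stake` otherwise."""
    return p_success * payoff - (1 - p_success) * stake

bond = expected_return(p_success=0.98, payoff=5, stake=100)        # ~2.9
startup = expected_return(p_success=0.10, payoff=1000, stake=100)  # ~10.0
print(bond, startup)
```

The startup's higher expected value comes bundled with a 90% chance of losing the whole stake, which is exactly the variance a balanced portfolio is meant to absorb.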
The goal of my investment strategy is to preserve some of my capital if things go really bad, while at the same time putting some of my money at greater risk to get a faster payoff. The general principle is that the more risk I assume, the more money I should make on the backside. The corollary is that the more risk I take, the greater chance I have of losing everything. Let’s tie this back to our project discussion.
Projects are investment increments in our Project Portfolio. If I am investing in something that is well understood, in a well understood market, one where we can adequately predict outcomes, chances are I’m not the only one that can see the opportunity or has the capability to build it. I am probably in a more commoditized market and there is a pretty good chance that my long term yield on the investment will be lower. It’s important to control cost because margin isn’t as great.
What if I am investing, though, in something that is difficult or unknown? Chances are I won’t have the same level of confidence I’ll see a return on my investment. If I am successful, I might be able to change the world and get rich. If I fail, I might run out of money and be unable to feed my family. That is the nature of risk. I think the fundamental problem with projects and estimates is that companies are putting money in very high risk investments expecting a guaranteed return.

Managing Portfolio Risk and Return
Personally, I think we should estimate and plan for the stuff that is estimateable. Many, many software products fall into this category and with proper forward planning, risk identification, and a willingness to inspect and adapt and get continuous feedback, we would greatly increase the odds that these projects could deliver a relatively fixed scope within a set of established time and cost constraints. We’ll adapt in the small to deliver in the large… just like my house example.
The projects which can’t be done this way need to use a language of investment and risk. These are not safe investments. We are putting money at risk in hopes of getting significant return. In these projects, we are spending money to learn, and based on what we learn, we adapt and we change, and maybe we even pivot into something entirely different we never even expected. As an investor, I can continue to invest as long as I want, but I never get a guarantee.
The fundamental disconnect is when we start to blur that which is safe and that which is risky and try to use the same language to describe both. We try to use predictive techniques for stuff that needs to be adaptive and we try to converge on outcomes that need to be inherently emergent. Many companies see this and separate development from R&D. Some though are accidentally investing in R&D when they think they are doing safer product development.
When you are doing R&D, an inherently high risk investment, with the expectation of safety and safe returns… that is when we get into trouble. We get ourselves into an irrational place where we are making bets without understanding the risk profile of the bet. You can’t manage your business that way… you can’t manage ANY business that way. My take is that this fundamental disconnect is what is driving the discussion around estimating in software right now.

Quick Summary
I get really tired of the never-ending debate about whether we should have projects in product development. I get really tired of the debate over estimating or not estimating. There are fundamentals underneath these debates which are seldom addressed. What is our domain? What is our context? What are the goals of our delivery system? What is the nature of our customer and their needs? What do they have to spend? What do they expect in return?
How I manage my consulting company is often quite different from how I recommend that companies run their product delivery. The world I live in is often quite different from the world my customers live in. We have different constraints and different variables. We have different customers with different needs and sell different products. I like this notion of looking at projects or products as investment vehicles with different risk profiles. Understand your risk and invest accordingly is good advice.
Check out the previous post, Should You Use Agile To Build Your Next Home?
In every company where I have had the chance to accompany a Scrum implementation, reporting has always been a difficult and controversially discussed topic. Usually the background is the status reports from classic project management, with a traffic light, results, activities, risks, and required actions, in more or less modified form. Maybe I am missing a particular gene, but I have never been able to extract relevant information from these status reports.
What people look at, and what sticks with the readers of the report, is the traffic light. As a rule it is green, at most yellow; never would anyone expose themselves by admitting they depend on outside help. The people who have to produce these reports generally consider them superfluous and meaningless. If you write something unwelcome in them, you have to struggle with follow-up questions that only keep you from solving the problem, because support and help are rather the exception.
And then, suddenly, Scrum arrives. The POs first have to get to know and use the new artifacts. Management and the other consumers of reporting, however, are usually a bit faster than this adjustment process. They used to have a status report, and suddenly they have nothing, or perhaps a few sticky notes. So people start to wonder: what could reporting look like in Scrum?
- Can we build a traffic light into the burndown chart (because we don't really want to give up the traffic light!)?
- How can we show what is currently going on in the team (e.g., a developer is out for three months)?
- How can we justify ourselves in retrospect if something does go wrong in six months?
What is often forgotten is that reporting is actually built into Scrum! If the Product Owner continuously maintains the core artifacts, such as the release plan, and a burndown chart is kept for the current release, all the relevant information should be available, and it is even more meaningful than any prose. These artifacts show exactly what has actually been delivered and what is planned for the future. An explanation of why things are the way they are is, of course, not ruled out in either reporting approach.
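To show how little machinery this built-in report needs, here is a minimal sketch of a release burndown as plain data; all numbers are invented:

```python
# A release burndown is just the remaining backlog size after each sprint.
# Plotting this series (or reading it as a table) answers the usual
# status-report questions: what was delivered, and what is still planned.

def burndown(total_points: int, done_per_sprint: list[int]) -> list[int]:
    """Remaining story points before sprint 1 and after each sprint."""
    remaining = [total_points]
    for done in done_per_sprint:
        remaining.append(remaining[-1] - done)
    return remaining

print(burndown(100, [20, 15, 25]))  # [100, 80, 65, 40]
```

A flat or rising tail in this series tells a stakeholder more about the release than any green traffic light would.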
What does the customer or management need to know about this form of reporting? As a PO on one of my projects aptly remarked: “Far too often, something is created in anticipatory obedience that may be too much, too little, or otherwise inadequate for the customer. We first have to understand what the actual need is!” Therefore:
- Try to rely as little as possible on written reporting alone. Take the artifacts and bring them to the stakeholders!
- Ask precisely what information the stakeholders need. Meet them halfway where appropriate, and try to find a way that means as little effort as possible for you. But look at the requirements critically: why is this being asked for? (Lack of trust, reporting upwards, comparability with other reports the stakeholder receives, etc.) Do it as you would with your product: understand the customer, or in this case the user of the reporting.
- Involve your team in maintaining the artifacts. Make the reporting the team's reporting, not reporting for the stakeholders. What is needed is coupling, not decoupling.
What experiences have you had? What does your customer need? How do you integrate this into the process, or use it at the same time as reporting for the team?
First, let me thank you for reading this. You may have forgotten about me. Let me re-introduce myself in case you have (smile). My name is Mike Vizdos, and I am the creator of the site at www.implementingscrum.com. And. It’s been pretty “dark” here since August of 2012. That’s over a year. Yikes.
The original comic strip went up in 2006, and Tony (the illustrator) is no longer on this project (he is doing AWESOME things today!). That is the “wow,” and I need to offer you my sincerest apologies for leaping into a TARDIS for a while. Real life took over:

- My family. Priority One. This continues to be a huge realization for me personally.
- Clients. Around the world. Many people have been hearing “NO” lately (this is very powerful, by the way, and helps with my prioritization of life, the universe, and everything).
- Working with a non-profit (as a volunteer) at a place called Gangplank (www.GangplankRVA.org, in Richmond, Virginia, and affiliated with the other Gangplank locations).
- I have cut WAY back on the travel. I am no longer a Delta Diamond for 2014 (first time since the inception of that program).
- I don’t do many public workshops anymore (except that I am starting to ramp up again in Richmond, Virginia [my home town in the USA]). I mainly spend time with the agile communities in Richmond (Virginia), Chandler (Arizona), and down in Costa Rica.
- I have been doing a lot of work personally with something called “The Lean Startup” (more on that soon).
- I’ve been pretty active on Twitter (www.twitter.com/mvizdos) and Facebook (www.facebook.com/VizdosEnterprises).
- My old dog died (Xena, the awesome one I talked about in workshops) and we are recently “Foster Failures” (we just adopted a puppy as dog #2 in the house) to join our very lovable black mutt.

These are not meant to be excuses. Because excuses suck. I’ve been talking a lot about something I have tagged as #DELIVER out on Twitter. My current message is SLOW DOWN and DELIVER something. It’s something I am trying to do once again. Even with all the “scaling” stuff being the latest “silver bullet” in the agile world, my main message is really becoming clear: #DELIVER. And I was swapping e-mails today with an old friend, and she pulled a, “WTF Mike, it looks like you just *GAVE UP* on this site.” Have I?
Seems like it. Huh. Kind of sad.

Here’s what’s ALSO been happening... The cartoons are getting linked to daily from places all around the world (at least I think they are still just from Earth), from public places to behind those corporate firewalls. People are subscribing to the blog for updates (and for many of you this is the first you have heard from me ever!). People continue to use the cartoons in presentations all over the place!

So. You are here. And. I’d like to re-engage with you and provide valuable FREE content here at www.implementingscrum.com again. You are still here for a reason (or maybe not and you just forgot about me lol). I’ve got some ideas on how to move forward with this site. It’s been up for many, many years, there are over a hundred cartoons here, and the content has evolved. It will continue to evolve. Scrum does. Agile does. Life does.

You are here for a reason. Can you let me know WHY you are here and what, if anything, I can do to share valuable (and still FREE) content with you? Let’s see where we go. It’s 2014. I’ve toyed with the idea of nuking the site and starting over from scratch. That’s pretty radical. That’s me. It even still pisses me off that this site looks like total crap on a mobile device; so much for a responsive design at this point. I am still around. Let’s see where we go with this little site in the future... Together! Thank you.

- mike vizdos