We expect this release to significantly reduce the system's complexity, as we have redesigned the projects-teams selector. Previously, your selection of projects and teams was applied globally to all views that you opened. This caused some confusion, especially when users with different projects-teams selections collaborated on public views.
To remedy this, we've made the projects-teams selector a part of each view's settings. All users will now see views with a predefined projects-teams selection applied. This predefined selection is set by the view's owner.
You can still look at any view through any projects-teams context that you want by clicking on the selector. When you change the selection for a view, the selector will be highlighted yellow, like this:
This highlighting draws your attention to the fact that you have changed the projects-teams selection for this view, and so you are looking at a different set of data than other users are. There is a handy 'revert' option to quickly sync the view's projects and teams back to its public settings.
You will now be able to batch-add brief comments from the Batch Actions Panel to a group of selected items on a Board view.
- Modified Date as lanes
- Emojis in the Tags lane
- Batch update of checkbox custom fields
- Epics added to the Process Control chart, Cycle Time Distribution and Relations network diagram
- 'Open in new tab' doesn't work properly in Safari Version 10.0 for macOS
- Login page sometimes did not accept valid emails
Our upcoming release (v.3.10.2) will contain a complete redesign of the projects-teams selector. To reduce complexity, we've made the selector a part of views (rather than a global setting found on the top bar). It will now be clear if a view is displaying data from the default projects-teams selection made by the owner of the view, or data from the projects and teams selected by you.
We made these changes to solve two main issues. The first is that data in views would unexpectedly change because of unintended projects-teams selections. This happened when you navigated through views with and without predefined selections by the view owners. It was sometimes unclear why certain projects and teams were displayed in certain views.
The second issue would occur when users collaborated on views. It was often unclear for users that they were seeing different data than their teammates because they had set a different selection of projects and teams. To fix this, we've made the projects-teams selector as straightforward as we can. You will now always know what data you're looking at on the view and why.
By default, a view will show the selection set by the view owner in View Setup.
You can still make a view display data from any projects and teams that you need to see. When you modify the set of projects and teams shown on the view, the selector will be highlighted.
These changes are applied only for you; other users will still see the view with the predefined projects and teams selected by the view's owner.
Once you set projects and teams for a view, your selection will be saved for that view. You can easily revert back to the predefined selection by clicking the revert button.
If the view's owner hasn't set a projects-teams selection for the view, then the revert action will not work because there is nothing to revert to.
For more information, visit the Project and Teams Selector article in our User Guide.
Rebasing is a topic that comes up all the time when using Git. Many times, rebasing results in having to do a force push, which makes some people wary of rebasing. No need to worry! Once you understand why a force push is necessary, and how to rebase responsibly, you’ll feel more comfortable.
So, what is rebasing, and how do you rebase in Git? Rebasing is simply taking changes that have been made somewhere else, incorporating them into your branch, and then replaying all of your changes on top of that new base.
You’re, in fact, changing the commit history, which Doc taught us could have disastrous consequences. (Especially if you run into your future self.)
So, let’s throw on some Huey Lewis, listen to The Power of Love on repeat, and dive into an example featuring Back to the Future. And if you’re even considering using the CLI… Where we’re going, we don’t need the command line!

The Timeline
I made a new repo in GitKraken, a cross-platform Git GUI, featuring some main plot points from Back to the Future. First, Marty is sent back to 1985. You’ll see I created a new timeline branch that has a couple of commits representing the events that follow. Then, I pushed the timeline branch up to GitHub.

Marty is sent back in time, where he encounters some highs and lows.
This timeline branch ultimately leads Marty successfully back to the future, and ends happily with Biff the bully waxing Marty’s dad’s BMW; plus Marty has a brand new Toyota 4×4. Yay! Everyone loves a happy ending!
However, back on master, there’s another change made where the Grays Sports Almanac has been added to the picture. (Uh oh… Biff is going to change the world into a very ugly place…)
If I rebase the timeline on top of master (by dragging the timeline label on the left onto the master label and selecting rebase timeline into master), GitKraken will replay all of my commits on top of the last commit on master, in the same order.
As you’re probably aware, this action affects the future, and Biff becomes a philandering, billionaire casino owner and marries Marty’s mom. (Any resemblance to current presidential candidates is purely coincidental, but I digress.)

Biff has the opportunity to take over the world… and does.
Back on my remote timeline branch, you’ll see that I still have everything happening in the happy timeline, but locally we have a different commit history.
Therefore, GitKraken is not going to allow a simple push. Git only allows pushing new commits onto the end of a branch’s history and can’t insert other commits into the middle, which is why I have to do a force push. The force push literally replaces the remote branch with what we have locally.
Now, this is where some of the trouble comes in. If I replace my remote branch with what I have locally, this has the ability to overwrite other changes, and it’s exactly why a lot of people are reluctant to use rebasing.

‘Mom hydrates a pizza!’ is added.
Let’s say that other changes were made to my remote branch that I haven’t pulled in locally. So, on the remote timeline branch, Doc made another commit called ‘Mom Hydrates a Pizza!’, which doesn’t exist on my local timeline branch.
Now if I do a force push, whatever commits are not on the local branch will be lost, and we don’t get the hydrated pizza. In this case, I can cherry pick any new commits into my local timeline, and then do the force push.
I know, I know, this lands us with the dystopian timeline and ruins Marty’s happily ever after…

The Golden Rule of Rebasing
The Golden Rule of Rebasing reads: “Never rebase while you’re on a public branch.” This way, no one else will be pushing other changes, and no commits that aren’t in your local repo will exist on the remote branch. So then when you push, there’s no possibility of deleting data.
If my calculations are correct, when this baby hits 88 miles an hour, you’re gonna see some serious Git!!
Here is my slide (yes, it’s just one slide) from my keynote at AgileByExample in Warsaw.
And a couple of photos:
We recently interviewed Kel Koenig, release train engineer at Dean Health Plan, to find out why the organization selected VersionOne and the Scaled Agile Framework® (SAFe®) to accelerate its agile transformation and achieve its business goals. In the video below, Koenig talks about how the …
The post Achieving Business Goals with VersionOne and SAFe: An Interview with Dean Health Plan appeared first on The Agile Management Blog.
A couple of years ago, Mike Cottmeyer wrote a blog post on How to Structure Your Agile Enterprise. He contended that, at scale, we need to organize teams around capabilities. He referenced refactoring legacy architecture into a Service Oriented Architecture (SOA).
We have proven this with many of our clients over the last couple of years. We want to organize around products and their capabilities. A capability is an outcome-based view of what the product does. In other words, products, features, or services can be capabilities. As you design your organization, you can use SOA principles to structure around these capabilities.
According to Thomas Erl’s book, “SOA Principles of Service Design,” there are 8 main SOA principles. Below are ways you can use these principles as you transform your enterprise:

Standardized Service Contract
“Services within the same service inventory are in compliance with the same contract design standards.”
Individual parts of an engine each have detailed specifications so they will fit together consistently when assembled. As we design an organization, we want our teams to be highly cohesive and well understood. We want to have a governance model that defines the inputs/outputs for each stage. This ensures consistency and predictability of value flowing through the System of Delivery. Well-understood definitions of ready/done for Epics, Features, and Stories between Portfolio, Program, and Delivery teams provide the contract for work to flow through the system.

Service Loose Coupling
“Service contracts impose low consumer coupling requirements and are themselves decoupled from their surrounding environment.”
Think about an electrical outlet. You can plug in a lamp, radio, television, or even a toaster because the interface is standardized. You wouldn’t connect your television directly to the underlying electrical wires. This is analogous to loose coupling of teams. Teams agree to contracts across different capabilities in an organization to sequence or orchestrate work in parallel. Back-end services and front-end UI teams can develop capabilities simultaneously with a standardized and agreed-upon interface. When decomposing work, keeping this in mind allows for better efficiency.

Service Abstraction
“Service contracts only contain essential information and information about services is limited to what is published in service contracts.”
As we design organizations around capabilities, we want to bring together cross-functional teams of experts in that capability. We want to define the interface for teams to interact and exchange work, but allow each team to decompose and refine their own work. Teams require autonomy to self-organize and determine the best way to accomplish their work. Other parties don’t need to know how they do their work, just that it returns the expected result every time.

Service Reusability
“Services contain and express agnostic logic and can be positioned as reusable enterprise resources.”
As we look across the organization, we want to identify areas of reuse. For example, many different products utilize the services layer to localize business logic. Forming teams around such capabilities allows for optimized expertise of the platform and eliminates the need to spread this knowledge across every team. While small-team Scrum may advocate fully cross-functional teams, as you scale in larger organizations, this becomes impossible due to size, complexity, and the sheer number of different technologies and domains. As in SOA, we monitor for bottlenecks and can optimize flow based on demand.

Service Autonomy
“Services exercise a high level of control over their underlying runtime execution environment.”
Teams need to be stable and have local autonomy to make decisions and do the work requested. We want to decouple systems and environments to allow continuous delivery and break dependencies to allow these teams to be successful. Over time, independent funding of teams is possible, allowing for true agility.

Service Statelessness
“Services minimize resource consumption by deferring the management of state information when necessary.”
In SOA, statelessness means that a service doesn’t need to know or care about previous calls. It can do the work it needs to do with the information provided. Applied to organizational design, we want teams to be autonomous and have knowledge of their own work. Build your team structure to eliminate or minimize dependencies. Require well-defined requests as input to the team so they have the clarity required to take it and run. This means no dependencies or reliance on other teams.

Service Discoverability
“Services are supplemented with communicative metadata by which they can be effectively discovered and interpreted.”
Defining a clear end state vision for your organization is crucial for reaching organizational agility. It ensures everyone is on the same page and working towards the same goals. Defining the structure, governance, and metrics to measure progress is step one in any transformation. A transparent roadmap and plan ensures teams understand the organizational design and how work needs to flow through the system.

Service Composability
“Services are effective composition participants, regardless of the size and complexity of the composition.”
The concept of service composability is taking a large problem and breaking it into smaller, more manageable chunks. In organizational design, this speaks to organizing in vertical structures that progressively decompose business value. Use a multi-tiered governance model to refine work into smaller pieces so the appropriate team can carry out the work (e.g., Epic to Feature to Story to Task). This also allows for adaptability and flexibility as your market or organization changes.
Organizational design is complex and one size definitely does not fit all. It requires working with the client and understanding their unique end state. These ideas are by no means comprehensive, but can help guide towards the path of Organizational Agility.
GitKraken is the most luxurious Git GUI for Windows, Mac and Linux! There are a bunch of reasons why, but first, let’s talk about data.
GitKraken is used to navigate a lot of data in a lot of different ways. As a dev here at Axosoft, one of my recent goals was to figure out how to get all that data viewable to the user. So, I started work on Rickscroll, a high-performance scrolling utility for React.

The Data
There’s the left panel, graph view, diff view, blame view (plus accompanying commit list), and the commit/WIP browser, all displaying a decent-to-gigantic amount of data in a list-like structure.
And, we need these areas of the application to display their data in a performant fashion. Trouble is, the DOM is slow. “How slow is it,” you may ask. Let’s check that out:
There seems to be a linear relationship between the number of rows to render and render time. For small, simple lists, this render time is totally acceptable; however, it’s not acceptable performance for any of the aforementioned views that we have in GitKraken.

The Almighty Left Panel
In order to demonstrate, let’s walk through a scenario that existed in GitKraken in the left panel prior to v1.7.
GitKraken performs an auto fetch (once a minute with factory settings). When GitKraken performs this auto fetch, the app finds out that a new branch was added to one of the remotes GitKraken has listed in the left panel for a given repository.
Let’s imagine we currently have 300 tags, 10 remotes, 90 branches listed as remote, 10 local branches, some pull requests, and last but not least, a stash.
Well, GitKraken passes that new branch entry down into the left panel, which means we need to do this expensive DOM calculation on far more than just the single row we want to modify. We really need to re-render quite a bit, as can be seen in the figure above.
The kicker here is that in an auto fetch, we do a series of fetches against every remote listed in your left panel. Every time a branch is discovered, or a reference is updated, the left panel must perform this expensive render.
For these critical areas of the app, we need to be much smarter about how we render our content so that the overall time for a given render is much smaller. We would prefer that each critical section of the app take no more than 10-20ms when responding to a change.
It’s important that we minimize these render times so that the app feels consistently responsive.

Enter Scrollable
The GitKraken dev team had been floating around some high-performance scrolling ideas for quite some time. The graph is currently powered by our first foray into the area. The team’s first version of high-performance scrolling abstracted the scrolling concept into a React component made of three primary divs: the content area and two scroll bars.
The content div represented the viewable area of the scrollable, and the two other divs were placeholders that ran the height/width of the viewable content.
Inside the scroll bar divs, we placed a div of the height/width of the viewable content + any overflow, such that those scroll bar divs would host a native scrollbar. We then hijacked the scroll events for each div and calculated an offset based on every scroll event.
The problem with this component is it left the child component completely in charge of taking that offset and applying it in a meaningful way.
Our graph was the first component to use the scrollable component and was built with a tiling system to display tiles of content as you scroll. GitKraken shows 150 graph rows per tile; when a user scrolls, GitKraken will render the next tile as it comes into view.
We do the scrolling by rendering every tile div that will exist for the current graph as empty. Each of these divs has a transform property on them that uses translate3d to move or scroll these divs in the content area.
As we move these blank tiles into view, we populate them with content. Our team debounced the scrolling operation because it turns out that rendering 150 graph rows is still quite an expensive operation (causing a boatload of lag during a typical scroll).
Consequently, if you scroll too quickly, the debounce is the reason that you’ll see blank tiles as you scroll in the graph panel.
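The tiling approach described above can be sketched roughly as follows. This is an illustrative sketch, not GitKraken's actual code: the 150-rows-per-tile figure comes from the post, but the row height and all function and variable names are assumptions.

```javascript
// Illustrative sketch of tiled scrolling with translate3d.
// TILE_ROWS comes from the post; ROW_HEIGHT and all names are assumptions.
const TILE_ROWS = 150;  // graph rows per tile, per the post
const ROW_HEIGHT = 24;  // assumed fixed row height, in px
const TILE_HEIGHT = TILE_ROWS * ROW_HEIGHT;

// Move a (possibly still empty) tile into place. translate3d keeps the
// tile on its own GPU-composited layer, so scrolling avoids repaints.
function positionTile(tileEl, tileIndex, scrollOffset) {
  const y = tileIndex * TILE_HEIGHT - scrollOffset;
  tileEl.style.transform = `translate3d(0, ${y}px, 0)`;
}

// Only the tiles intersecting the viewport need real content;
// everything else stays blank until it scrolls into view.
function visibleTileRange(scrollOffset, viewportHeight) {
  const first = Math.floor(scrollOffset / TILE_HEIGHT);
  const last = Math.floor((scrollOffset + viewportHeight - 1) / TILE_HEIGHT);
  return { first, last };
}
```

The debouncing the post mentions would then simply delay the expensive "fill this tile with content" step, which is why fast scrolling can briefly show blank tiles.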
At the time, Scrollable was a huge improvement and allowed GitKraken to scale to where it currently is today. The downside is that for lack of a standard tiling solution, each component we scrollabalized reimplemented tiling in a way that ‘made sense’ to the content it needed to display.
The diff view, WIP/commit area, graph view, and blame view all ended up building their own tiling solution based on their specific needs. A whole mess of special snowflakes. Scrollable has grown poorly. Working with components wrapped in a Scrollable is very messy.

Enter Rickscroll
Rickscroll aims to heal the pain points that we had working with Scrollable. The important points of our various tiling solutions were to separate content into rows and tiles accordingly, use translate3d to simulate scrolling, and try to limit the amount of renderable content to the visible area of the scrollable window.
At this point, we needed to solve the problem of building a clean interface for Rickscroll that addressed all of our application’s needs. Our additional needs were variable height rows, overridable horizontal scrolling, resizable gutters on the left and right, and additional scroll behaviors like locking certain rows to the top of the viewable area as a user scrolls.
It’s worth noting that translate3d is an important part of this component’s workflow. Elements with translate3d in their transform property are hardware accelerated. Leaning on hardware acceleration is another way to minimize paints in the DOM during scrolling operations.
What we came up with is an object structure which represents rows in a list. Each row has its own component class (a React component), props, height, and gutter config.
Rickscroll then takes this list, does a single iteration over it and extracts a mapping of rows to tiles and the corresponding size of those tiles (due to the height property on each row).
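As a rough illustration of that single pass (the names, the tile-height cap, and the row shapes here are invented for the sketch, not Rickscroll's real API), each row entry carries its component, props, height, and gutter config, and consecutive rows are grouped into tiles whose pixel offsets are recorded along the way:

```javascript
// Hypothetical single pass grouping rows into tiles. MAX_TILE_HEIGHT
// is an assumed cap on a tile's pixel height, not a Rickscroll constant.
const MAX_TILE_HEIGHT = 2000;

function buildTiles(rows) {
  const tiles = [];
  let current = { rows: [], offset: 0, height: 0 };
  for (const row of rows) {
    // Start a new tile once the current one would exceed the cap.
    if (current.height + row.height > MAX_TILE_HEIGHT && current.rows.length) {
      tiles.push(current);
      current = { rows: [], offset: current.offset + current.height, height: 0 };
    }
    current.rows.push(row);
    current.height += row.height;
  }
  if (current.rows.length) tiles.push(current);
  return tiles;
}

// Example row entries; 'BranchRow' stands in for a React component class.
const rows = Array.from({ length: 300 }, (_, i) => ({
  component: 'BranchRow', props: { name: `branch-${i}` },
  height: i % 10 === 0 ? 40 : 24, gutterConfig: {},
}));
```

Because every tile records its start offset and total height, variable-height rows cost nothing extra at scroll time.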
Rickscroll is able to map offsets to the start of a tile and, by knowing the size of those tiles, is able to recognize when a tile should be removed from the DOM.
When Rickscroll finishes that single pass, it uses component state to track the offset of the scrollbars and applies that offset to our tile-offset map to figure out which tile we should render at the top of the viewable content area.
We then use the calculated height of the viewable area and the sizes of the tiles to figure out how many tiles we need to be showing for the visible area to stay populated during any scroll operation.
When we have completed a full translation equal to or greater than the top tile’s size, we remove that tile, and we add a new tile to the bottom of the visible area, starting our translation over again from 0. We are able to minimize paints using this flow.
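Using those recorded offsets and sizes, picking which tiles to render is then straightforward. Here is a hedged sketch (again with invented names) of selecting the tiles that intersect the viewport:

```javascript
// Given tiles that each record their start offset and pixel height,
// return the ones intersecting the viewport. Tiles outside this range
// can be dropped from the DOM, which keeps paints to a minimum.
function tilesInView(tiles, scrollOffset, viewportHeight) {
  const viewBottom = scrollOffset + viewportHeight;
  return tiles.filter(t =>
    t.offset + t.height > scrollOffset && t.offset < viewBottom
  );
}

// Example: three 100px tiles, with the viewport straddling the first two.
const demoTiles = [
  { offset: 0, height: 100 },
  { offset: 100, height: 100 },
  { offset: 200, height: 100 },
];
const visible = tilesInView(demoTiles, 50, 100);
```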
Look at all that scrolling!
As such, we have provided the ability to pass through offsets to content components in the rows. This will improve the capacity for rows to behave differently based on horizontal scroll offsets (such as collecting graph nodes in the gutter).
Another feature we were able to put into Rickscroll was the handling of sections or multiple lists per single scrollable. By building an additional object around our lists of rows, we can provide a special header row for each of these lists.
That header row can then be factored into our scroll calculations such that we can build useful context for scroll operations, such as locking headers to the top of the scroll window. We achieve this by both inserting the header row into its appropriate position in the list, and by keeping a separate container which hosts a special locked header row.
When Rickscroll determines that a header row is being scrolled toward that special locked header row, we are able to translate the locked header in sync with the movement of the rest of the rows and to replace the content of that special header once we’ve scrolled the new header into the appropriate position.
In GitKraken v1.8, we’ve shipped the left panel with locking headers and sections in Rickscroll.
Now, did we mention that we packed all of these nifty features into Rickscroll and also achieved incredible performance? Check out the graph below.
There’s still some room to optimize further, but for this level of performance, GitKraken should be able to fly fast as we roll out improvements to our scrollable renders.
When a component needs to display scrollable content as a series of rows, tiling those rows into small, easily renderable chunks and using hardware acceleration via translate3d to scroll is a winning approach.
With these three tricks, we can build all sorts of API and scrolling niceties with little worry of ruining the performance of the application overall. Further, we’re able to move past the scrolling problem, because we now have a suitable API to leverage whenever we need high-performance scrolling in the application.
No more reimplementing tiles. Instead, we’re able to quickly iterate on how a view should work and function.
And, if all else fails, check out this documentation that should help you with Rickscroll.
Are you measuring the value, risk, and quality flowing through your DevOps pipelines? Here is a value-based approach to measuring DevOps performance that will help your organization better evaluate the effectiveness of its DevOps initiatives. As organizations become increasingly value-stream …
The post Measuring DevOps Performance Using a Value-Based Approach appeared first on The Agile Management Blog.
Are there any Guns N’ Roses fans in the house? Anyone? You over there in the corner—yeah you, what was the name of the amazing guitarist who wore a top hat and even though he wasn’t the lead singer, somehow managed to outshine Axl Rose?
Yes, it’s Slash! Good job!
Why are we talking about Slash? Well, because our left panel keeps improving with every release and for v1.8, it’s all about the slash.
You’ll remember in our last release, v1.7, we made significant performance improvements to the left panel, which made life much better for everyone. Now, we’re introducing folders to the famed left panel.

Folder Hierarchy
If you have slashes in your branch names, GitKraken will simply build a folder hierarchy for you.
GitKraken will build a folder hierarchy for you in the left panel.
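A minimal sketch of how slash-separated names might be folded into such a hierarchy (purely illustrative; the function and field names here are invented, not GitKraken's internals):

```javascript
// Fold slash-separated branch names into a nested folder tree.
// All but the last path segment become folders; the last is a branch leaf.
function buildFolderTree(branchNames) {
  const root = {};
  for (const name of branchNames) {
    const parts = name.split('/');
    let node = root;
    parts.slice(0, -1).forEach(part => {
      node.folders = node.folders || {};
      node.folders[part] = node.folders[part] || {};
      node = node.folders[part];
    });
    (node.branches = node.branches || []).push(parts[parts.length - 1]);
  }
  return root;
}

// e.g. 'feature/login' and 'feature/signup' share one 'feature' folder.
const tree = buildFolderTree(['feature/login', 'feature/signup', 'main']);
```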
News like this is so glorious, it’s just like hearing that sweet opening riff to Sweet Child O’ Mine. You should probably listen to it now just to remember how good it is. We’ll wait.

Filtering
Okay, you’re back! And you’re already familiar with the fuzzy finder: another one of our helpful features that makes it easier and quicker to find things. Now, we’ve simply taken that functionality and applied it to the left panel as well.
You’ll notice that a new search box has been ever so delicately placed at the top of that left panel. So, you can use Cmd/Ctrl + Shift + F to find what you’re looking for. Suddenly you’re a rockstar… but with code, not a guitar!

Filtering has been added to the left panel. It’s So Easy.
No more clicking, scrolling, cursing or wondering where your branches, remotes, pull requests, etc. are. Don’t Breakdown. Now you can simply filter with the search box. Oh, oh, oh, sweet love of mine!
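The fuzzy matching behind that filter can be pictured as a simple subsequence check, something along these lines (a sketch in spirit only; GitKraken's real matcher certainly adds scoring and ranking on top):

```javascript
// Return true when every character of `query` appears in `candidate`
// in order (case-insensitive), the classic fuzzy-finder subsequence test.
function fuzzyMatch(query, candidate) {
  const q = query.toLowerCase();
  const c = candidate.toLowerCase();
  let qi = 0;
  for (let ci = 0; ci < c.length && qi < q.length; ci++) {
    if (c[ci] === q[qi]) qi++;
  }
  return qi === q.length;
}

// Example: 'flog' matches 'feature/login' but not 'bugfix/scroll'.
const refs = ['feature/login', 'bugfix/scroll', 'origin/master'];
const matches = refs.filter(r => fuzzyMatch('flog', r));
```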
Run on over to the release notes to read more. They won’t let you down like perhaps The Spaghetti Incident? did.
It didn't start out that way. Scrum Day Portugal is a two-day event. I arrived Tuesday afternoon, halfway into the first day. The speakers were interesting, the talks were great, but we were running late. It felt like a death march project, even though the conference had barely begun.
My job was to facilitate the second day. We had a really tight schedule: seven Ignite talks followed by 2 Pecha Kuchas and 3½ hours of Open Space. I realized that staying on schedule would be both challenging and really important. If people are exhausted at the Open Space, they can employ the law of two feet (leave), and all the air goes out of the event. This would be a disaster. How to fix the problem?
Tuesday night, the speakers went out for dinner together. We talked about the problem. A big challenge was that most participants arrived late on Tuesday, and would probably do so again on Wednesday, so we could not just ignore our customers and start on time. Another challenge was that one speaker needed more time than originally planned. Not knowing how late we would have to start, we couldn't decide how to address the scheduling problem. We agreed to make the decision Wednesday morning.
On Wednesday, I invited all the speakers to a daily scrum, shortly before the opening was scheduled. While I tried to make a plan for the start times of each speaker, Chet Hendrickson started writing cards on the table. He made a card for each speaker, the coffee break and lunch.
Visualizing the program à la XP
At this point, I gave up on my “spreadsheet”! Using Chet's cards and the original schedule, we calculated the duration of each session. We agreed to start 15 minutes late but keep the original timings, so we calculated the new start times for each speaker. What about the speaker who needed more time? “I can shorten my talk, no problem!” said Manny Gonzales, CEO of the Scrum Alliance. (When was the last time you heard a CEO volunteer to shorten their talk?)
What about transition times? There were no transition times; these were the times each of us would start. “Oh, so I have to shorten my talk a bit.” We all understood the problem and the goal. We had implicitly agreed to do our best to make it happen.
“The key word is responsibility,” explained Chet. “Everyone in the team has an obligation to do the right thing.” The cards are a tool he uses in Extreme Programming to visualize system architecture, and thanks to the visualization, everyone knew what they had to do.
How did we stay on time? During each session, I just needed to know who the next speaker was and when their session was scheduled to start. The speakers asked for a friendly wave five minutes before the end of their sessions, so they could remain aware of when they had to finish.
In the worst case, a session ended a whole minute late. Some of the speakers over-compensated (shortened their talks), so by lunchtime, we were back on the original schedule!
So the conference ran smoothly and everybody left the conference with a smile. What does this have to do with Scrum and XP?
- Someone was responsible for the process, and raised the questions. In Scrum, that person is called the Scrum Master.
- The team got together to figure out how to achieve the day's goal. In Scrum that's called a Daily Scrum. We left the meeting with a plan and a common goal.
- The Scrum Master remained focused on the process, giving friendly reminders when it was helpful.
- The time-boxing gave us orientation and helped us deliver a great conference.
- Visualizing the problem and giving it to the whole team made solving the problem much easier. (I don't know what Chet calls his board, but it's a great approach.)
Nowadays, though, divide et impera is increasingly recognised as an ill-suited approach for the complex world of business. Agile thinking largely agrees with this. Nevertheless, we Agile supporters may have a hard time providing a clear explanation of why this is so. A metaphor (oh, thank you, Clean Language!) inspired by Karlene Roberts's work on HRO organisations helps to build some (hopefully helpful) arguments describing the negative effects of siloed organisations. Specialized silos, when they interact, create a "space in-between" that behaves like "interstices" or "holes". As in a raw material or fabric, these interstices are the main source of fragility.

Organisational interstices: behavior and effects

Loss of knowledge when dividing work by specialisation

The specialized, and eventually externalized, organization has very sharp expertise, but it ignores the synergy and the integrated knowledge of the whole operational chain.

Conflicting interests

The high cost of interstices

One of the commonly cited reasons for creating specialized organisms is cost reduction. Usually, the cost of this set-up is considerably higher because of interstice costs. When an interstice appears between two organizations, the need for transverse communication and coordination rises quickly. Groups of coordinators, teams in charge of transversal communication, and facilitators are therefore created. The cost of this coordination can blow through the ceiling.
Enhanced fragility

What does creating a transverse management and coordination group mean? More interstices, with all the related dysfunctions! An infernal loop is created: interstices tend to self-multiply to fix their dysfunctions, thereby creating extra dysfunctions. Breaking the loop and shifting the associated mental model is the next challenge.

How to close an interstice: the no-punishment policy

This is inspired by the HRO (High Reliability Organisations) model. The practice of "immunity" in case of error is based on the principle that learning and knowledge are acquired when there is no fear of telling all the facts that create the big picture of a post-incident review. Silence has no value for learning, improving, and avoiding further incidents... and punishment is the mother of silence.
Dear reader, does this seem like an idealistic Disney-world framework to you right now? Then let me give you examples of organisations that apply the non-punishment policies of HRO: airlines, air forces, and, more and more, hospitals and emergency units. Very unexpected references for La-La Wonderlands, so to say.
The non-punishment policy leads to high reliability because big accidents can be avoided through open learning from minor incidents (near misses).
We have made some huge changes in our prioritization and planning process this year. In a nutshell, we have switched to open allocation. Here is the story.

Old way: boards, feature ranking, top-down approach
For the last several years we had a Product Board. This was a committee that focused on annual product plans. It consisted of up to a dozen people with various roles, from sales to developers. We discussed our product strategy and set high-level goals (like "increase our share in the enterprise market"). We created a ranking model that we used to prioritize features and create roadmaps:
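To make the idea concrete, here is a toy sketch of a weighted ranking model of this kind. The criteria, weights, and feature scores below are invented for illustration; they are not the actual model we used.

```python
# Hypothetical weighted feature-ranking model: each feature gets a score
# from weighted criteria, and features are sorted by score, highest first.
# All criteria and weights here are illustrative assumptions.

CRITERIA_WEIGHTS = {
    "strategic_fit": 3.0,      # alignment with high-level goals
    "customer_demand": 2.0,    # how often customers ask for it
    "implementation_cost": -1.5,  # higher cost lowers the rank
}

def rank_features(features):
    """Return features ordered by weighted score, best candidate first."""
    def score(feature):
        return sum(weight * feature.get(criterion, 0)
                   for criterion, weight in CRITERIA_WEIGHTS.items())
    return sorted(features, key=score, reverse=True)

# Example input with made-up scores on a 1-5 scale:
features = [
    {"name": "Advanced QA area", "strategic_fit": 4,
     "customer_demand": 5, "implementation_cost": 4},
    {"name": "Faster search", "strategic_fit": 5,
     "customer_demand": 4, "implementation_cost": 2},
]
ranked = rank_features(features)
```

A model like this looks objective, but the weights themselves are a judgment call, which is part of why the approach described below eventually broke down for us.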
It kinda worked, but at some point I realized that we kept pushing more and more features into Targetprocess, making it ever more complex and heavy. Many people inside the company were not happy with this direction and did not believe in it. Large customers demanded complex features like more flexible team management, people allocation, an advanced QA area, etc. These are all good features, but we, as a company, somehow lost the feeling of the end-user experience. Simple things like search, navigation, performance, and overall simplicity were buried under fancy new features. This year, we put an end to that approach.
We want to create a tool that is pleasant to use. A tool that boosts your productivity and is almost invisible. A tool that saves your time. To achieve this goal, we have to go back to the basics. We should fix and polish what we have in Targetprocess already (and we have a lot) and then move forward with care to create new modules and explore new possibilities.
We have disbanded the Product Board, removed feature prioritization, done away with the top-down approach to people/team allocation, and replaced it all with a few quite simple rules.

New way: Product Owner, Initiatives, and Sources
The Product Owner sets a very high level strategic theme for the next 1-2 years. Our current theme is very simple to grasp:
Basically, we want to do anything that reduces complexity, simplifies basic scenarios like finding information, improves performance, and fixes your pains in the product.
It does not mean that we will not add new features. For example, the current email notification mechanism is really outdated, so we are going to replace it and implement in-app notifications. But, most likely, we will not add new major modules into Targetprocess in the near future. Again, we are focusing on existing users and their complaints.

Initiatives
Our people have virtually full freedom to start an Initiative that relates to the strategic theme. An Initiative is a project that has start/end dates, a defined scope and a defined team. It can be as short as 2 weeks with a single person in the team or as large as 3 months with 6-8 people in a team.
There are just three simple rules:
- Any person can start an Initiative. The Initiative should be approved by the Product Owner and the Technical Owner (we plan to use this approval mechanism for some time in order to check how the new approach goes). The Initiative should have a deadline defined by the Team.
- Any person can join any Initiative.
- Any person can leave an Initiative at any time.
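To make the three rules concrete, here is a hypothetical sketch of how an Initiative could be modelled in code. All names and fields are invented for illustration; this is not actual Targetprocess code.

```python
# Illustrative model of an Initiative and its three rules.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Initiative:
    source: str        # the person who started the Initiative
    problem: str       # the main problem it aims to solve
    deadline: date     # rule 1: a deadline defined by the team
    approved_by: set = field(default_factory=set)
    helpers: set = field(default_factory=set)

    def is_approved(self) -> bool:
        # Rule 1: needs sign-off from both the Product Owner
        # and the Technical Owner.
        return {"product_owner", "technical_owner"} <= self.approved_by

    def join(self, person: str) -> None:
        # Rule 2: any person can join any Initiative.
        self.helpers.add(person)

    def leave(self, person: str) -> None:
        # Rule 3: any person can leave an Initiative at any time.
        self.helpers.discard(person)
```

The interesting design property is rule 3: because leaving is always allowed, a Source who leads poorly simply loses their Helpers, which is the built-in check on leadership mentioned below.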
A Source is the person who started the Initiative. He or she assembles the team, defines the main problem the Initiative aims to solve, and is fully responsible for the Initiative's success. The Source makes all final functional decisions, technical decisions, etc. (Remember, Helpers are free to leave an Initiative at any time, so there is a built-in mechanism to control poor leadership.)
A Helper is a person who joins an Initiative and is committed to help complete it by the agreed deadline. He or she should focus on the Initiative and make it happen.
The Initiative deadline day is pretty significant. Two things should happen on the deadline day:
- The Source makes a company-wide demo. They show the results to the whole company and explain what the team has accomplished.
- The Initiative should be live on production.
As you see, freedom meets responsibility here. People are free to start Initiatives and work on almost anything, but they have to meet their deadlines and deliver the defined scope. This creates significant peer pressure, since you don't want to show bad results during the demo.
This process was started in July. We still have a few teams finalizing old features, but the majority of developers are working in the Initiatives mode now. Here's a screenshot of the Initiatives currently in progress:
The Initiatives in the Backlog are just markers; some of them will not go into development, and there is no priority here. Next is the Initiatives Kanban Board:
You may ask, how do we define what is most important? The answer is: it does not matter. If our customers have a real pain, and we have a few people who really want to solve that problem — it will be solved. Nobody can dictate a roadmap, nobody can set priorities, not even the Product Owner. The Product Owner can start their own Initiatives (if they can get enough Helpers) or decline some Initiatives (if they take tooooo long or don't fit the strategic theme).
As a result, we don't have roadmaps at all. We don't discuss priorities. And we can't answer questions like "When will you have better Git integration?" We can only make promises about things already started (you can see some of them above). Everyone inside our company cares about making our customers happy with the product, and now they have real power to react faster and help you.
We can also promise that Targetprocess will become easier, faster, and more useful with every new release.