In the presentation, I started by describing a simple flow, such as the flow of drinks at Starbucks.
If you are a coffee drinker who frequents Starbucks, like me, you probably appreciate that you don't have to wait for all the lattes, cappuccinos, etc. ordered ahead of you to be made first. This is because the barista works on those drinks exclusively, while the cashier can pour your coffee for you directly. Looking at the Kanban board above, my coffee goes from the "order" column straight to the "leave" column.
This benefits everyone. As I mentioned in the talk, a key metric for Starbucks is the customer cycle time: the amount of time between when you walk in the door and when you walk out with your drink. The critical path for coffee drinkers and latte drinkers isn't the same, but it isn't entirely separate either; much as I personally would enjoy it, there is no separate cashier line for coffee drinkers. Starbucks has chosen not to optimize specifically for us, for good reason.
This is similar to the approach you might use for mixed asset types. Although every asset type requires a different amount of effort and follows a partially separate path, measuring every asset's cycle time will still give us valuable information. The goal isn't to achieve a uniform cycle time for all assets; just as at Starbucks, people who order lattes should expect to wait longer than us super-efficient coffee drinkers.
Let's look at a Kanban board that shows various assets going through a production pipeline:

A mixed asset Kanban board
This board includes assets that might need particle FX or animation applied to them, or neither. The same important principles apply: we measure throughput and limit work-in-progress (WiP) regardless of which steps each asset takes. Some assets will skip steps, just as I skip the barista. Measuring and limiting in this way can improve the entire system. As a coffee drinker, I don't care how quickly the barista can make a latte, but I greatly appreciate it when the under-tasked barista helps fill coffee orders.
This can happen in an asset production pipeline as well. Starbucks has far shorter coffee cycle times than barista-drink cycle times, and that is fine for everyone; as we measure throughput, we can create similar policies in a production pipeline. The key is to measure throughput for different asset classes and explore where and when one class's cycle time can be improved without impacting the other classes.
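As a minimal sketch of what that measurement can look like, suppose each finished asset's entry and exit days on the board are logged. The record layout and the sample numbers below are illustrative assumptions, not a prescribed tool:

```cpp
#include <iostream>
#include <map>
#include <string>
#include <utility>
#include <vector>

// One finished asset's trip across the board (illustrative layout).
struct AssetRecord {
    std::string assetClass;  // e.g. "static prop", "FX", "animated"
    double startDay;         // day it entered the first column
    double endDay;           // day it reached the last column
};

int main() {
    // Made-up sample data standing in for a real board's history.
    std::vector<AssetRecord> finished = {
        {"static prop", 0, 2}, {"static prop", 1, 3},
        {"FX", 0, 7},          {"animated", 2, 12},
    };

    // Average cycle time per asset class: the per-class metric worth tracking.
    std::map<std::string, std::pair<double, int>> totals;  // {sum, count}
    for (const auto& a : finished) {
        totals[a.assetClass].first += a.endDay - a.startDay;
        totals[a.assetClass].second += 1;
    }
    for (const auto& [cls, t] : totals) {
        std::cout << cls << ": average cycle time "
                  << (t.first / t.second) << " days\n";
    }
    return 0;
}
```

Different averages per class are expected and fine; what matters is watching each class's trend as you adjust WiP limits and policies.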
Most production pipelines are far more complex than this, but the same principles apply. Start by simply modeling what you're doing now. Then measure throughput and reduce WiP.
...and don't be surprised if, as you try to improve your existing overloaded heterogeneous pipeline, you conclude that the assumptions behind the pipeline need an overhaul!
It's both. In sprint planning, the team creates an initial sprint backlog, which is a forecast of the tasks, or bits of work, that they feel represent the best path to achieving the sprint goal. The form this takes is up to them (hours, days, thrown chicken bone patterns, etc.). They will refine how they create the backlog over time to improve the value of their forecasts.
The commitment part is more about a commitment to do their best to achieve the goal while maintaining quality.
The problem is that very often this commitment comes into conflict with the initial forecast. For example, I once estimated it would take two days to add drift-racing physics (with a handbrake control) to our vehicle dynamics model. I was able to do this, but it took another week to make it "fun", much of that time spent sitting next to a designer. This couldn't have been predicted, and we could have stopped after two days and said "sprint goal achieved", but was it really?
At what point do we say, "it's good enough, time to move on"? That can't come from sprint planning. It has to come from the daily conversation with the team (including the product owner). Sometimes this results in the forecast growing and the team delivering a part of the goal that meets the quality bar.
This definition can scare managers who first hear about it, and it's where they and their teams struggle at first. This often comes from a culture that isn't prepared to trust developers to judge or achieve quality on their own, and from teams' inexperience with being given this control. So the forecast becomes the commitment, and the teams focus on making the hours look good rather than the game. It takes time to establish the balance.
A commitment to quality at the expense of the forecast is the correct choice. It's very easy to cut quality to look good on paper, but it will bite you in the end. This doesn't mean we pursue the highest possible quality at all costs. That quality has to be arbitrated by execution and measurement. It has to be balanced with the needs of the customer. It shouldn't rule at all costs.

My favorite example of "quality gone wild" comes from another driving game. As the prototypical product owner, I encountered an artist modeling window flower boxes throughout the city players were to race in. These required thousands of polygons and detailed textures to render. The flower boxes were beautiful and added much color, but given the cost of creating and rendering them, they couldn't be justified, especially from the point of view of the player, who would be passing them at over 90 MPH.
So we killed the window boxes, but it was a good lesson on our team's path to learning how to build "95 MPH art".
April 11-12: Certified Scrum Product Owner for Video Game Development course. Details here.
May 16-17: Certified ScrumMaster for Video Game Development course. Details here.
These courses focus on the two Scrum roles, which guide creative teams in applying Scrum to video game development projects. Come discuss the application of Scrum to video games with the certified trainer who has 20 years of video game development experience, introduced the industry to Scrum in 2003, and wrote the book.
The courses are open to people who develop any type of product, but they especially benefit those who work on products with a more creative dimension.
In my spare time, I build various small devices using Arduino hardware or help my sons create small games. I enjoy building devices because I had a background in hardware development as well as software development before I became a full-time game developer 20 years ago.
Embedded development benefits from agile practices as much as pure-software development does. I recently shared a few of those tips:
- Find ways to iterate the hardware as well as the software. We found that reducing the time between software development and hardware bring-up paid dividends despite the cost of additional hardware development. More breadboarding/prototyping was a big benefit.
- Find ways to implement unit testing of the hardware as it's brought up and incrementally improved. Using an example device that controls a light's brightness: have a test that sends brightness commands to the hardware and lets someone verify that the hardware performs as expected at each level (a minimal test sketch appears after this list). Automation of this is nice, but not always possible.
- Find ways to ensure that interfaces are established and communicated and, if they change, that the changes are quickly relayed between hardware and software developers. This is usually not a big problem on small teams, but it is with larger ones. So, for example, if hardware changes the brightness control from an analog interface to a digital one, the change is reflected in the code and tested quickly (the second sketch after this list shows one way to keep such a change localized).
- Encourage hardware and software engineers to overlap as much as possible. I like the phrase "generalizing specialists". One tip: don't play paintball as a team building exercise. We did that. The hardware engineers teamed up, figured out how to increase the shot velocity of their paintball markers, and gave all of us software engineers painful welts. It wasn't a good team building exercise. ;)
- If you have any sensors, motor controls, transmitters, receivers, etc. that have to interface with the real, noisy world, try to test them as subsystems as early as possible in the target environment. One time we went out to sea for the first test of an underwater modem that we had simulated in Matlab and tested in an enclosed water tank. The temperature inversion layer, multipath, and Doppler effects of the actual ocean environment demonstrated that we were much farther from completion than we thought. It was a bad day.
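To make the brightness-test idea above concrete, here's a minimal Arduino-style sketch. It assumes the light is driven from a PWM pin; the pin number and levels are illustrative, not from any particular device of mine:

```cpp
// Hypothetical test harness: step a PWM-dimmed light through known
// brightness levels so a tester (or a light sensor) can verify each one.
const int LIGHT_PIN = 9;  // assumed PWM-capable pin driving the light

void setup() {
    pinMode(LIGHT_PIN, OUTPUT);
    Serial.begin(9600);
}

void loop() {
    const int levels[] = {0, 64, 128, 192, 255};  // representative commands
    for (int level : levels) {
        analogWrite(LIGHT_PIN, level);
        Serial.print("brightness level: ");
        Serial.println(level);
        delay(2000);  // give the tester time to confirm the output
    }
}
```

Replacing the human check with a photoresistor read and a pass/fail threshold is the automation step, when the hardware allows it.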
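And for the interface point, one way to keep an analog-to-digital hardware change localized is to hide the brightness control behind a small interface. This is a sketch only; the I2C details are assumptions for illustration, not the actual hardware discussed:

```cpp
#include <Wire.h>

// Hypothetical interface isolating how brightness is set, so swapping
// the hardware (PWM pin -> I2C driver chip) touches exactly one class.
class BrightnessControl {
public:
    virtual ~BrightnessControl() {}
    virtual void setLevel(uint8_t level) = 0;  // 0..255
};

// Analog implementation: PWM directly on a pin.
// (Construct in setup(), after the Arduino core has initialized.)
class AnalogBrightness : public BrightnessControl {
public:
    explicit AnalogBrightness(int pin) : pin_(pin) { pinMode(pin_, OUTPUT); }
    void setLevel(uint8_t level) override { analogWrite(pin_, level); }
private:
    int pin_;
};

// Digital replacement: send the level to an assumed I2C driver chip.
// (Call Wire.begin() once in setup() before using this.)
class DigitalBrightness : public BrightnessControl {
public:
    explicit DigitalBrightness(uint8_t address) : address_(address) {}
    void setLevel(uint8_t level) override {
        Wire.beginTransmission(address_);
        Wire.write(level);
        Wire.endTransmission();
    }
private:
    uint8_t address_;
};
```

The test sketch above can then exercise either implementation unchanged, which is what makes the hardware swap quick to verify.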