Continuous Delivery is all people are talking about these days. Well, me too. It's also what all the best development shops are moving to now. Well, me too.
I participated in a journey of more than a year to bring Assembla into the Continuous Delivery world, and as any Continuous Delivery aficionado will tell you, we continue to improve upon what we have learned. After all, that is the basis of Continuous Delivery: improving on what you learn in real time. As a matter of fact, it's the key to success with Continuous Delivery.
If I had to tell you one thing to do right now to start towards Continuous Delivery, it would be Production Monitoring. Useful for all systems, Production Monitoring is critical to your Continuous Delivery system: it allows you to react in real time and gives you the confidence to continue on.

Wednesday, May 22: 1800 UTC, 1400 EDT, 1100 PDT
Assembla and Airbrake will present a Webinar: Production Monitoring: The Key Step Towards Continuous Delivery to help answer questions about Continuous Delivery and Production Monitoring.
In this webinar, we will discuss:
- What is Continuous Delivery, and how can it produce faster feature releases, improved quality and higher customer satisfaction?
- How do Continuous Delivery and production monitoring fit together?
- How can you collect and use error data and feedback effectively?
- Can Continuous Delivery with production monitoring actually decrease developers' stress levels and increase stability?
To learn more about Continuous Delivery, try these blog articles:
We announced our latest feature, Server-Side Hooks, the other day. But before we even did that, something very cool happened: we got our first hook submitted by a contributor outside of Assembla. Thanks so much, Jakub. Now users with Subversion repositories can install this hook and check their PHP code syntax.
We would never have had the time to design, let alone implement, a solution for checking PHP code ourselves, because we would want to check all sorts of coding styles and the scope would keep growing. Now users can scratch their own itch with minimal effort on our part.
For those of you still not sure what I am talking about: we are allowing customers to write their own Server-Side Hooks and install them on our servers. That’s right, you can extend Assembla’s cloud repository offering.
Thanks again and keep those hooks coming.
Does your ticket sidebar look like an endless grocery list? Are you looking for a way to organize stories and epics? Are you banging your head trying to find your way through a sea of tickets? Then I think you will be more than relieved to hear that we’ve released Tags for Tickets.
So, go to any ticket view and start tagging. It’s easy. Just click on the edit icon and enter a new tag or select from the existing list.
If you want to change which tags appear in the popup for users to select, go to Tickets -> Settings -> Tags. Mark a tag Active if you want it to appear in the popup selector, or Hidden if you want to remove it.
We hope you enjoy this feature and start tagging right away! Let us know if you have any comments or suggestions.
If you want to learn more about Assembla's Ticket and Issue Management System you can read more about it here.
In the past, Assembla search did not work very well. It did not match the search query to the kind of searching people were actually doing: looking for a specific, recent item. When you use Assembla Search, the odds are you are looking for a specific document, ticket, wiki page, or merge request, not for a wide range of information on a certain topic. How you query and which results are relevant differ from one case to the other.
To fix this, we introduced a series of changes that will hopefully make your life easier when using Assembla Search. Instead of having exact and non-exact matches mixed in the same list of results, you can now switch between one and the other. Just type your query in quotes to look for an exact match, or use the “exact match” checkbox.
We’ve also changed the default sort criterion to date, so more recent results appear at the top of the list (don’t worry, you can still sort by relevance if you need to). To make the UI cleaner, we’ve unified some result categories. Merge requests and commit comments now appear under the same tag, “Merge requests”. Likewise, tickets and ticket comments appear under the tag “Tickets”.
We hope that with these changes you will use search more. I am using it a LOT more. If you have any suggestions or feedback, you can post a comment here.
Need help? Learn more about How Assembla Can Help You.
Cloud repository hosts have failed us. The power of hosting your repository locally is the ability to implement Server-Side Hooks. These hooks allow you to control your repository and the source code within it. It's super convenient for an organization with many contributors to a single repository: you can syntax-check code, ensure commit messages are proper, add the power of automation, or do anything else you need your repository to do, better than if you were relying on external webhooks.
To add a Server Side Hook in your current Assembla Repository - go to the Settings Page -> Server-Side Hooks:
Git: pre-receive, post-receive and update hooks
SVN: pre-commit, post-commit, pre-revprop-change and post-revprop-change hooks
Community Supported: Submit your own hooks or partake in the fruits of another’s labor
Prevent commits that do not comply with your Coding Standards
Validate commit messages for status updates and valid ticket reference
Create Workflows with specific status and ticket changes or kick off external processes
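To make the commit-message use case concrete, here is a minimal sketch in plain shell of the kind of check such a hook can run. The function name and the ticket-reference pattern are my own illustrative choices, not Assembla's hook API; a real pre-receive or pre-commit hook would obtain the message via git or svnlook rather than taking it as an argument.

```shell
#!/bin/sh
# Illustrative only: reject messages that lack a ticket reference like "#123".
# A real server-side hook would read the message from git or svnlook.
valid_message() {
  printf '%s\n' "$1" | grep -Eq '#[0-9]+'
}

if valid_message "Fix login redirect, re #482"; then
  echo "accepted"
fi
if ! valid_message "quick tweak"; then
  echo "rejected: no ticket reference"
fi
```

The same shape works for either VCS; only the way the hook receives the commit message differs.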
We are very excited about Server Side Hooks and hope that you find them as useful as we do. Take a look at some of our other available Repository Features.
If you want to save time on a distributed team, this post is for you.
The best way to manage external teams and resources is the Space Manager Tool, available with all Assembla Portfolios. Space Manager gives your team the ability to:
- Share Code with Limited Access
- Share Partial Ticket Lists
- Share a Prioritized Backlog
- Share Tools You Want
How does it work?
Let's start by setting up a Master Space and adding Child Spaces to it.
The Master Space is the container for the Child Spaces. Child Spaces are filtered spaces of the Master Space.
Say, you have an IT-outsourcing company "MasterSoft inc".
You provide services to a lot of customers. Let's take two of them: "SimpleJava" and "Soft.com".
Of course, you would like to keep customers informed. Also, Child Space members should have access to their company's content.
To achieve these goals, you will set up a Master Space. All "MasterSoft" team members (PMs, developers, ops, etc.) will become members of this space.
To add Master Space features to your space, use the Space Manager Tool.
It can be installed from Admin->Tools:
Also, "MasterSoft" will need a Ticket Tool to post tasks for all customers.
Each customer company will have its own repository in its Child Space, so the Master Space will not require a repo tool.
At this point you will need Tags for each child project. Tags can be managed in the Ticket Tool Settings, under the "Tags" section.
Tags are also used to mark and filter tickets in the Simple Planner and Cardwall for non-Child Spaces and Child Spaces without a filter.
In a filtered Child Space this option is disabled, since the filter has already been applied.
To create a Child Space you will need to switch to the Space Manager tab:
Finally, setup a new Child Space:
As shown in the form, Tickets will be shared with SimpleJava space, but will be filtered by the Tag "SimpleJava".
This means that only tickets with this Tag will be displayed in the Child Space. Also, all tickets created from the "SimpleJava" Child Space will be tagged with the "SimpleJava" tag. Nice, isn't it?
Now the only thing left is to invite the SimpleJava team members to the Child Space and set up their roles (member, watcher, or owner).
Note that all roles from the Master Space are delegated to the Child Spaces, so every Master Space member can access every Child Space with the same role they have in the Master Space.
The Child Space can be accessed directly from the Space Manager page or from the top bar dropdown navigation once you are in the Master Space.
It's time to talk about how this all works and what the basic workflow looks like for the MasterSoft use case.
Bob is a QA specialist at "MasterSoft" who found a typo in the application just deployed to the Google Play Market.
So Bob opens a new ticket from the Master Space and tags it with the "SimpleJava" tag.
Now all the developers who work on "SimpleJava" will see the new ticket in the filtered Child Space, and the SimpleJava team will be able to track its progress.
So what are the results?
- Tech Leads and Developers are more focused on their work in Child Spaces
- PMs can still see the list of tickets for all feature teams in the Master Space, filtered by any condition (say, fix-today tasks), alongside the filtered Backlog in the Child Space.
- Your customers can track the Prioritized Filtered Backlog in the child space
- A customer is prevented from accessing other customers' content
- You can add any usual tool (Wiki, Messages, Repositories, etc.) to the Child Space
- You can tag a ticket for more than one team, say "SimpleJava" and "Soft.com". This introduces even more flexibility into your process.
To find out more about Scaling Teams and Resources, you may want to read Beyond the Scrum Roadmap article.
If you haven't tried Space Manager for your projects yet, you definitely should: https://www.assembla.com/subscribe-portfolio.
Until yesterday, this report showed only a daily cumulative count of tickets; we have now improved upon it.
Do you use estimates in your daily work? Good news: today you can choose "Ticket Estimate" from the “Type” select box.
You will see the same Cumulative Flow chart, but this time it plots the summed estimates of your tickets (per status).
Which type of Estimating do you use?
- Do not use estimates: The default estimate value is 1.0, so you will get the same graph as Ticket Count.
- Show estimated total time: The estimate is saved as a float representing total time, so you will see a cumulative report of your tickets' total time.
- Show estimated points: You manually set points on each ticket; the result is a summation of ticket points over time.
- Estimate as T-Shirt sizes (Small / Medium / Large): Same as estimated points but with predefined values:
- Small => 1.0 points
- Medium => 3.0 points
- Large => 7.0 points
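As a quick sketch of how such a rollup works, here is the T-shirt mapping tallied in shell. The function name and the ticket list are illustrative; the point values are the ones listed above, with anything unrecognized falling back to the 1.0 default.

```shell
#!/bin/sh
# Map a T-shirt size to its point value (Small=1.0, Medium=3.0, Large=7.0).
# Anything else falls back to 1.0, matching the "do not use estimates" default.
size_points() {
  case "$1" in
    Small)  echo 1.0 ;;
    Medium) echo 3.0 ;;
    Large)  echo 7.0 ;;
    *)      echo 1.0 ;;
  esac
}

# Illustrative ticket list: sum the estimates the way the chart would per status.
total=0
for size in Small Medium Large Small; do
  total=$(awk -v a="$total" -v b="$(size_points "$size")" 'BEGIN {print a + b}')
done
echo "cumulative estimate: $total"   # 1 + 3 + 7 + 1 = 12
```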
We have been collecting this data for a week so far, so a full month of Cumulative Flow data will be available in three weeks. If you use estimates on tickets, then this upgrade is for you. Enjoy!
Read more about how Assembla can help you.
You will need this tool if you are interested in continuous integration or continuous delivery. If you are adding continuous integration to your development process, you need tests. You need to get automated tests from your developers. If your developers are like my developers, some of them are enthusiastic about adding tests, and some of them don't bother. You probably work with smart, creative, opinionated individuals who don't always respond to requests exactly the way you want them to. You can flail around for months looking like a jerk, and still not get the tests and standards that you want.
Code review is a simple way to get what you want. You can send code changes to the team members who are enthusiastic about automated testing. They will review the changes and, when needed, ask for better test coverage and standards compliance. When we added this type of review to our process at Assembla, we got almost everything we wanted within the first week. The change was fast and permanent.
Protected branches give you a ONE CLICK way to get what you want:
* Mark your master or trunk branch (the branch you run CI on) as protected. Click. Now, users who are not on a special list cannot commit directly. They have to make merge requests, which will get reviewed.
* Git repositories give you a super simple way to enforce mandatory code review. The system will automatically take a push to a protected branch, and move it into a temporary branch with a merge request.
* Find the developers who support your initiative, and add them to the list of people who can write to the protected branch.
You can try training, incentives, and persuasion for months and not get the test coverage and standards compliance that you want. You will get them immediately with this simple tactic.
Protected branches are available for both Git and Subversion.
Assembla Merge Requests add great value to the code development process. Now it’s time to bring some more automation to the Code Review process. And here comes a new version of Protected Branches for Git.

Old Flow
Previously, developing a feature or a bugfix looked like this:
- Create a new feature branch based on current production master
- Do some development and push code to new feature branch
- Go to Merge Requests Tab and create a Merge Request from new feature branch to master branch
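The steps above look roughly like this in plain git, using a throwaway local repository to stand in for the hosted one. Names like bugfix-42 are illustrative, and step 3 happens in the Assembla UI rather than on the command line.

```shell
#!/bin/sh
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/origin.git"          # stand-in for the hosted repo
git clone -q "$tmp/origin.git" "$tmp/work" 2>/dev/null
cd "$tmp/work"
git config user.email dev@example.com
git config user.name Dev
echo base > app.txt
git add app.txt
git commit -qm "production code"
git push -q origin HEAD:master                # seed the remote master
git checkout -qb bugfix-42                    # 1. feature branch off master
echo fix >> app.txt
git commit -qam "Fix crash, re #42"           # 2. develop on the branch...
git push -q origin bugfix-42                  #    ...and push it
# 3. open a Merge Request from bugfix-42 to master in the Merge Requests tab
git ls-remote --heads origin
```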
Now, with Protected Branches, where the ‘master’ branch is protected by your Tech Leads, it looks as follows:
- No need to create a new feature branch, do development on master.
- Push code to origin master
What is going to happen behind the scenes when you push:
- A new temporary remote branch with an ‘assembla-’ prefix will be created and your commits will be pushed to that branch; remote master will stay the same.
- A new merge request from this temporary branch to master will be created; you will see the URL of the MR on the command line.
This occurs in 2 cases:
- When you are a developer without write permission to ‘master’.
- When the ‘Mandatory review’ checkbox is checked for ‘master’. In this case, all pushed code goes through the review process, whether or not the pusher has write permission.
To continue development on the new automatic branch:
$ git fetch origin
$ git checkout assembla_1e65051188
$ echo "Hey, Merge Request Boss!" > WeContinueDevelopment # modify some file
$ git commit -a -m "We continue development to create new MR version"
$ git push origin assembla_1e65051188
We hope you will enjoy the New Flow. To learn more about Assembla features or sign up for a trial, click here.
I recently defined Continuous Integration as the practice of constantly merging development work with a Master/Trunk/Mainline branch so that you can test changes, and test that changes work with other changes. This is the "as early as possible" integration methodology. The idea is to test your code as often as possible to catch issues early (Continuous Delivery vs Continuous Deployment vs Continuous Integration).
Watching a presentation by Jez Humble of ThoughtWorks, who defines Continuous Integration (CI) in relation to Continuous Delivery, I realized that my definition is in direct opposition to two minutes of his presentation: http://www.youtube.com/watch?v=IBghnXBz3_w&feature=youtu.be&t=10m. But why is Jez so adamant about these points, when I feel I am doing CI without everyone committing to a Mainline daily, with feature branches, and often working locally, making commits without running ALL integration tests? Can I be? I say yes, and the reason lies in where the integration points are and what kind of code moves forward to Mainline: unstable code or stable code. These differences keep a clean, stable Mainline, which in turn gives you the ability to deploy code to Production at any time. Jez’s definition of CI (Centralized CI) causes bottlenecks and is in direct contradiction with the process of Continuous Deployment (CD), whereas Distributed CI removes these barriers while still giving confidence that good code is moving to Production.

Argument
The argument goes that a Continuous Integration process constantly integrates all development work across the entire project into Mainline, to detect issues early in the code’s lifecycle. I argue that when utilizing Continuous Deployment this is detrimental, and the proper way is to integrate only Production-ready code into Mainline, while merging Mainline back onto the isolated development work. Recently I explained this concept to another developer, whose response was: “I have never thought about merging backwards from master to branches in order to run tests, amazing”. Continuous Deployment is not achievable using traditional CI methodologies as described by ThoughtWorks and others, because the bottlenecks will prevent the flow of code from developer to Production. But the simple notion of merging backwards to run tests, in order to see a view of Mainline before you integrate upwards, will allow you to achieve Continuous Deployment.

Daily Commits to Mainline
Checking in only stable code is necessary for Continuous Deployment; however, it is in direct opposition to the Centralized CI evangelists’ rule #1 (http://www.youtube.com/watch?v=IBghnXBz3_w&feature=youtu.be&t=10m): you must check into Mainline daily. The idea is that you integrate with all development code daily, whether it is Production-ready or not. This means development work must be hidden if released, but it also means that development work that may be thrown out tomorrow may not integrate properly with Production-ready work today, causing an unstable Mainline. Any practice that forces Mainline to become unstable worries me a little; it just seems unnecessary.
In Distributed CI, a developer commits locally to the developer branch and integrates often (daily) backwards from Mainline, avoiding integration bottlenecks in Mainline. Anything a developer takes from Mainline in a Distributed CI process is considered stable, releasable code (though maybe not immediately used). Centralized CI may have a good build but still have unreleasable code. You must coordinate a Centralized CI release; a Distributed CI release does not have this issue, since Mainline is always considered stable and a release may be performed at any time.

Broken Builds
Oh, and let's not forget broken builds. Broken builds prevent anyone from reliably taking code from Mainline or moving code from Mainline to Production. Centralized CI has a way to “fix” this, rule #2 (http://eugenedvorkin.com/10-continuous-integration-practices/): never break the build, and if you do, you must fix it and not leave until you do. OK, never mind the insanity of the first part of this rule, since the only way to guarantee satisfying it is by never committing: I cannot control how my code integrates with other unknown code. And sure, if I break something, I understand that I must fix it. But how do we know it was my code that broke the build and not yours? We don't. But assume it was me; now everyone is waiting for me to fix it. I have become a bottleneck and no code can move past Mainline. Shoots, I guess Friday night beers are not in my future. The pipeline from developer to Production has halted dead in its tracks; no code can be Continuously Deployed. Some places call this a lesson (http://www.hanselman.com/blog/FirstRuleOfSoftwareDevelopment.aspx) and assume that you will learn from the ordeal. I think you will fear it, yes, but to work in fear . . . I prefer not to. I prefer not to have a system that requires me to stay late unduly to keep Mainline stable, but rather one that always ensures Mainline is stable, before and after my code is merged.
Distributed CI deals with this with a “mergeback” from Mainline to your developer branch: ensure you have a clean build after running unit tests on the branch, then merge up to Mainline. Otherwise, having failed a developer-branch unit test, do not merge to Mainline; do not become the bottleneck; go out for beers, rest well, and fix it tomorrow. After the developer branch is merged to Mainline, do not run the unit tests again: the test suite was already run against a copy of Mainline in your development branch, so knowing that all tests have passed, the code is deployed right out to Production. Then Mainline is merged backwards to other developers’ branches, and unit tests are run individually on each developer branch. If any tests fail, the owner of that branch must fix the issue.

Feature Branches
Rule #3 (http://www.youtube.com/watch?v=IBghnXBz3_w&feature=youtu.be&t=11m49s minutes 10-12 are definitely my favorite): no feature branches. Wait, what? Hold on, I like my feature branches. But Jez did just say: “But you can’t say you’re doing Continuous Integration and be doing feature branching. It’s just not possible by definition” (while waving hands), didn’t he? Looking at the definition of CI from Martin Fowler of ThoughtWorks:
Continuous Integration is a software development practice where members of a team integrate their work frequently, usually each person integrates at least daily - leading to multiple integrations per day. Each integration is verified by an automated build (including test) to detect integration errors as quickly as possible. (http://martinfowler.com/articles/continuousIntegration.html)
I do not actually see anywhere that CI cannot have feature branches, just that all developers' work must integrate frequently, not even necessarily daily. Well, in Jez’s Centralized CI, feature branches are a no-go: you must integrate ALL changes daily into a centralized Mainline. But hold on, this code can’t be integrated into Mainline; it’s weeks away from delivery. In Distributed CI, a feature branch is nothing more than a developer’s branch. Feature-branch away. Mainline will be integrated back into the feature branch after any Production deploy, and all new stable code will be integrated and tested immediately.
Hmmm, sounds like feature branching and CI are not mutually exclusive. Nice. But Jez was very clear on this. Yes, his worry is about long-running code not being integrated with. OK, so you want a true feature branch and not to integrate up to Mainline for longer than today. Well, you can, since you are always integrating backwards from Mainline; you just must do it continually. Once we are ready to move the feature branch into Mainline, we already have a snapshot of what Mainline will look like: the feature branch is Mainline plus all the new code of the feature branch, and it has been continually integrated with, every time Mainline is updated, since its creation. If two developers want to integrate with each other, they can in the Distributed CI world: they can be moved behind another integration point, where their work is merged up to it and then to Mainline. This is basically utilizing the ability to create a distributed network of developers working off various localized Mainlines that merge up to a Master Mainline.

Distributed CI Reigns Supreme with CD
Distributed CI has the advantage for Continuous Deployment because it keeps a clean, stable Mainline branch that can always be deployed to Production. In a Centralized CI process, an unstable Mainline will exist if code does not integrate properly (a broken build) or if unfinished work is integrated. That works quite well with iteration release planning, but creates a bottleneck for Continuous Deployment. The direct line from developer branch to Production must be kept clean in CD; Distributed CI does this by only allowing Production-ready code into Mainline.

Is Distributed CI really CI?
Yes. If you are implementing Distributed CI and CD together, you will always be integrating your developer code with the latest stable release to Production. If an integration test fails after a release to Production, it fails on a developer's branch, not for everyone, so we know the failure is isolated to that developer's branch and its new code; the developer whose branch failed must attend to it. In Centralized CI, the developers must discuss and work out the issue to see who is at fault. CD demands that you deliver code to Production often. In Distributed CI, any time you deliver code to Production, the Mainline branch is merged back to developer branches and tested (thank goodness for automation). What does this mean? Let's look at the life cycle of a commit a little closer:
The top diagram shows a Centralized CI process, where the developer merges into a Mainline integrated with other developers' commits before running unit tests. Mainline cannot necessarily be deployed, since some work from developers may not be Production-ready. The lower diagram shows a Distributed CI process, where the developer merges Mainline backwards, then integrates up to Mainline, then pushes on through to Production.
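The Distributed CI flow, with its mergeback step, can be sketched in plain git using a throwaway local repository. Branch names and the grep line standing in for the unit-test suite are my own illustrative choices.

```shell
#!/bin/sh
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email dev@example.com
git config user.name Dev
echo base > app.txt
git add app.txt
git commit -qm "stable Mainline"
git branch -m mainline                  # name the stable branch explicitly
git checkout -qb feature-x              # developer branch
echo feature >> app.txt
git commit -qam "feature work, re #7"
git checkout -q mainline                # meanwhile Mainline gets a release
echo hotfix > fix.txt
git add fix.txt
git commit -qm "new stable release"
git checkout -q feature-x
git merge -q mainline -m "mergeback"    # merge Mainline BACK into the branch
grep -q feature app.txt && test -f fix.txt   # stand-in for the unit-test suite
git checkout -q mainline                # tests passed: integrate up and ship
git merge -q feature-x
echo "mainline files: $(ls)"
```

The key point the sketch shows: the test step runs on the developer branch after the mergeback, so Mainline only ever receives code that has already been tested against it.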
If you are performing CD, you are releasing very often, and each stable commit is still tested against every developer's development work. Except in Distributed CI it's tested individually, so detecting issues is easier (the amount of conflicting code is smaller) and isolated to a developer's branch. Distributed CI is a more thorough form of CI when implemented with CD, which attempts to break processes down into smaller automated pieces. Centralized CI is suited to iteration release planning, but Distributed CI works just as well for that, and better. If you are doing Centralized CI, then you are not doing Continuous Deployment. Distributed CI removes all the bottlenecks and constraints that Centralized CI places on the workflow from developer to Production.
So why all this confusion around CI and the artificial constraints put on it? Clearly, the centralized VCS is the basis for these constraints. Basically, Distributed CI treats each developer branch as if it were Mainline and tests there, instead of up on a centralized Mainline. Perhaps it's because CI grew up in a time of non-distributed VCS and has not modernized since. Now is the time to take advantage of a distributed VCS. Unleash those feature branches, and commit code now; merge to Mainline and see it in Production moments later. Yes, moments later, every time.
If anyone else is performing CI like this, or you have any questions, please leave a comment below.
To read more about Continuous Deployment and how it can help you, check out Assembla and CD.
We are happy to announce that we have implemented an Access Control List (ACL) for Subversion directories. ACL workflow allows you to restrict directories so that only certain developers have write permission. This workflow can be enabled at critical times like when there is a feature freeze, or to protect sensitive areas of an application.
Those of you who store multiple projects in a single Subversion repository can now easily configure permissions for your project teams on the directory level. But enough talking, if there isn't a screenshot it didn't happen, right?
So how does it work?
- Specify users who will be able to write to certain directories - they will be the owners of that code
- Everyone else will be able to see, but if they want to contribute, they will have to send a merge request
- Directories a person is not allowed to write to are marked with a red lock icon.
- Directories that a person is an owner of are marked with a green lock icon.
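For comparison, this is the kind of rule set that Subversion itself expresses in a path-based authorization (authz) file. Assembla manages these permissions through the UI described above, so the snippet below is purely illustrative; the repository, directory, and user names are made up.

```ini
# Illustrative svn authz-style rules: alice owns /trunk/billing,
# everyone else can read but must send a merge request to change it.
[repo:/trunk/billing]
alice = rw
* = r
```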
When I first took on the role of CTO at Assembla, I researched team building and came upon the dreaded Corporate Retreat with a lava walk or lava flow exercise. It's a team-building exercise that, quite frankly, seems boring. Well, I currently reside in Hawaii, and we have a totally different take on the Lava Walk.
Here you see Ben Potts (DevOps extraordinaire), myself, and Nadia Romano (CX Engineer aficionado). These two, along with another (Ramsey, not pictured; he snapped the photo), followed me into a lava field for a grueling 13-hour hike that was meant to be only 2 hours. But once out in that field, the lava beckons you to continue on until you find it.

The Treacherous Road
The terrain was tough. Hardened lava is not so much like walking along pavement as walking along pavement littered with broken bottles after an earthquake. You have to be careful where you step, to say the least. Everything around you is black lava. The air is full of smoke and sulphur smells at times. And of course, we had limited water supplies, flashlights, and mostly appropriate clothing for the terrain (good hiking shoes, long pants, and an additional layer).
Fortunately, we also chose to walk under a nearly full moon, which let us skip the flashlights (the batteries on our smartphones, whose flashlight apps are great, were dying) and walk in the pale light of the moon. It was fantastic.
We made it to the lava fountains after 4 hours or so of walking. It was sheer magnificence. Watching lava ooze down the side of a fountain is a treat.
The video does it no justice; smartphones just can't capture the actual scene unfolding 3 feet (~1 meter) in front of your face.

Lessons Learned
An actual Lava Walk has many lessons to be learned hidden within it. Each person has a role to play:
- Leader, the one who sets the path and keeps the group moving
- Enthusiast, the one who remains positive and keeps the group motivated
- Stalwart, the one who continues on beyond all obstacles and triumphs
- Supporter, the one who helps others in their time of need
The role is not so much chosen as it is placed upon you. Sometimes the roles shift from person to person. I learned about these people while walking through this barren wasteland, and I am sure they learned about me. Ben is always positive and moving forward. He wants to see things and make things happen. Nadia will accomplish any task no matter how hard. She is not afraid of an obstacle in her way. I learned that I am stubborn, and I learned when I should push people to move faster or harder and when I should let them move at their own pace.
All in all, it was a great experience, and I hope to continue doing other team-building exercises in the real world, maybe not so extreme next time. I do not encourage you to make a real Lava Walk part of your corporate exercises. This time it ended well, with only some bruised ankles and sore feet. I do have to apologize to Ben's wife for keeping him out so late and worrying her half to death:
Shannon - I sincerely apologize, it was all my fault that Ben did not come back until the morning. Sorry.
We did get to see a beautiful sunrise in the morning as we left the fields. It was a worthwhile adventure.

Would I do it again?
Absolutely, I would. However, not all people want to trudge through lava fields; perhaps a Magic: The Gathering session would be a good team-building exercise next time.
People do love that game. He sure does.
I have spoken often about the Assembla process that we have been working in and how well it works for us, but how can you get the same benefits for your project? Well, there is no secret; it's just a matter of setting up a development fork-and-merge network. The basic idea is to isolate development work from stable work. By isolating all development work from stable work, you have a clear path from developer to production. Although this process is not specific to Assembla, let's see how it would work using Assembla. You could tailor this process to any system, and it would be just as powerful, though not as convenient as with the tools Assembla provides.
- Setup Development Space
First we need to set up a new space. You can do this from your Assembla start page by clicking the 'Create New Space' button. Depending on your account and plan, you might want to set up a public or private space. It does not matter which type of space you choose.
- Setup Origin Git Tool
Great, we now have a place to work. We need a Git Tool; if your space configuration did not include one, please add it from the Admin tab under the Tools heading.
Whether you just created the tool or it came pre-configured, I highly suggest renaming the Git Tool from the Settings subtab. Here you can rename the repository to “origin” and set an extension if you like; for the origin repo I prefer no extension, so it is simply named “project.git”. I like calling it “origin” since this is the standard git term for the original home of the code.
- Initial Commit to Origin
Now we have a Git Tool renamed to “origin” but no code in it. At this point, we can push code up to this remote repository and use it like we would normally use a git repo. Just to ensure everything is working, let's make sure we can read and write to the repo. Before you proceed, ensure that you have added your public ssh key to your profile (found in the upper-right dropdown menu->Edit Profile->SSH Keys). Once your key is added, follow these instructions (replacing project.git with your appropriate project name) on your localhost to create a local repository that you will sync to the remote repository:
$ git clone email@example.com:project.git # will complain that you cloned an empty repo
$ cd project
$ echo "Project Readme File" > Readme
$ git add Readme # add Readme file to git tracking
$ git commit -m "Initial commit of Readme file" Readme
$ git push origin master # create a master branch in "origin" repo
We have just made our first commit to origin/master. If you prefer, you could use the http(s) protocol to interact with your git repo. Https is particularly useful behind a firewall that does not have port 22 open for ssh (many corporate firewalls are set up this way), while still encrypting your data. Http(s) has more overhead and is therefore slower than the ssh protocol, and it does not update your client with progress as often (sometimes it looks like it is hanging when it is actually working) because it waits between actions before responding.
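If you want to switch an existing clone between the two protocols, you do not need to re-clone; `git remote set-url` rewrites the remote in place. A minimal sketch, noting that the hostnames below are placeholders and not Assembla's actual git endpoints:

```shell
# Sketch: switch the "origin" remote of an existing clone from ssh to https.
# The URLs are placeholders -- substitute your repository's real ssh/https URLs.
mkdir -p /tmp/proto-demo && cd /tmp/proto-demo
git init -q .
git remote add origin git@git.example.com:project.git            # ssh form
git remote set-url origin https://git.example.com/project.git    # https form
git remote get-url origin  # prints https://git.example.com/project.git
```

The same trick works in the other direction when you move from a locked-down network back to one with port 22 open.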
You can view this file that was just created in your code browser by navigating back to your Git Tool in Assembla and using the Browse Code subtab. To see the actual commit, take a look at the Commits subtab.
Hint: When committing, add status updates such as fixed #123 or re #123 to reference the ticket in your commit message; this also posts a link and the commit message on the ticket.
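For example, assuming a hypothetical ticket #123 exists in your space, commits whose messages carry these keywords will all show up on the ticket. A local sketch (the ticket number and file names are made up for illustration):

```shell
# Sketch: commit messages referencing a hypothetical ticket #123.
mkdir -p /tmp/ticket-demo && cd /tmp/ticket-demo
git init -q .
git config user.email dev@example.com && git config user.name Dev
echo "v1" > pager.php
git add pager.php
git commit -q -m "re #123 - first pass at pagination"
echo "v2" > pager.php
git commit -q -m "fixed #123 - correct off-by-one in pagination" pager.php
git log --oneline --grep="#123"   # both commits reference the ticket
```

The `re` form just links the commit to the ticket; the `fixed` form also moves the ticket's status when the commit reaches Assembla.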
- Our First Developer Fork
Theory: One could just work in the origin repository. You can branch and commit to origin as you would any other repository; however, this can cause bottlenecks and problems for your development process. Say that developerA commits bad code to origin/master – they discover this through QA, UAT or CI – and developerB needs to perform a critical hotfix to Production. If everyone commits to origin/master and deploys it to Production, then we have a bottleneck caused by developerA's bad commit. That commit must be reverted, or possibly you could branch from before the commit, add developerB's commit to that branch, and then deploy that branch to Production. Ugh, that is dirty.
Another model uses release branches to help solve this. Release branches rely on no one altering them between releases, but they take overhead to cut, to coordinate across developers, and to manage. This becomes very hard to maintain across many, many developers and does not allow a free flow of code from developer to Production; instead you must commit to several branches to keep your Development branches and Release branches in sync. Often release branches diverge from Development branches and cannot be patched easily, as the codebases differ and the merge results in conflicts or in expected logic going missing.
Instead, we introduce the Developer Fork. The Developer Fork is a repository based on origin, but it allows the developer to maintain their own repository, including branches, tags and commits. It also has one other nice advantage that is a little hard to see at first: if managed properly, you will not run into strange merge problems with new work – all new work is applied on top of branches that are consistent with your current work. In other words, you will not get any of those annoying “you must merge before you can push” errors.
To create the developer fork, go to your origin's Git Tool and then the Fork Network subtab; from here you can choose to fork your project, either to another space or within the same space. The choice depends on whether this is a new project being branched, whether the developers will work off the same list of tickets, and whether you have permission in the target space to create the fork, among many other considerations. For our purposes, and for most teams working on the same list of tickets, it is best to keep this fork in the same space – so that you can reference the tickets in the commits easily.
Once you have forked the repository, go to the new Git Tool and, in the Settings subtab, rename it and give it an extension that matches the name. This will make it easier to talk about and to remember where you are pushing code or merging to/from. For our purposes we will call it “fork”.
Advanced: Any time you clone a repo, you are in fact creating a fork. You can then push this cloned repository to any remote repository. You may also connect any existing repository to any number of remotes, allowing you to maintain several remote repositories in a forked network from one local repo – you do not need to create a new tool or new space. If you already have two Git Tools in your Assembla space, you can add a “fork” from one repository to the other. Branches in git do not need a common ancestor, nor do they even have to be related to each other; you can store completely different information in different branches of the same repository. So, taking our origin/master setup in our space and a new Git Tool named “fork” with the “fork” extension, we can clone one repository and then push a fork from origin back up to the other. There are at least two ways to do it, each with different advantages:
# Using “origin” repository as your remote origin for your local repository – typical for people who have read/write to origin/master
$ git clone firstname.lastname@example.org:project.git # should have Readme file from before
$ cd project
$ git remote add project_fork email@example.com:project.fork.git # add remote
$ git push project_fork master # create a master branch in the "fork" repo
# Using your “fork” repository as your remote origin for your local repository – typical for a developer setup
$ git clone firstname.lastname@example.org:project.fork.git # will complain of cloning an empty repo
$ cd project.fork
$ git remote add project_origin email@example.com:project.git # add remote for origin
$ git fetch project_origin # fetch so origin's branches are known locally
$ git checkout -b orig_master --track project_origin/master # local branch "orig_master" tracking the origin/master remote branch
$ git push origin orig_master:master # push the orig_master branch as master on the "fork" remote (origin of this repo)
- Other Developer Forks
You can repeat the process above as many times as you like. I prefer to do this on a per-developer basis, but you can easily set up a team that works in branches and shares a common fork/master. This is the fork-per-team methodology. The advantage here is a single point of control for batching releases per team. However, this same advantage is also a disadvantage: you have a bottleneck if a developer has committed code that is not clean or is waiting on other code.
Theory: The advantage of the fork per developer is that each developer is responsible for their own fork/master; if they commit bad code or are waiting on other code, they do not block any other developer. This allows releases to occur more often, on a per-developer basis. If you are not in an environment that can sustain many releases very often, then you would arrange the fork network a little differently. Instead of developers working in a fork of origin, they work in a fork of the fork – yes, a fork of the fork. Just make the fork from the team's fork of origin, and you will have a repository that merges easily back to the first-level fork (use the fork network for this: fork from the team fork to create a developer's fork that merges back into the team fork).
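The two-level layout can be sketched entirely with local bare repositories standing in for the Assembla-hosted remotes (all paths below are placeholders); the developer's clone plays the role of the developer fork, and pushes land in the team fork while the true origin stays untouched until a release is batched:

```shell
# Sketch: origin <- team fork <- developer, using local bare repos
# in place of the hosted remotes. Paths are illustrative only.
mkdir -p /tmp/forknet && cd /tmp/forknet
git init -q --bare origin.git                 # stand-in for the hosted origin
git clone -q --bare origin.git team-fork.git  # the team's fork of origin
git clone -q team-fork.git dev                # a developer clones the team fork
cd dev
git config user.email dev@example.com && git config user.name Dev
git remote add project_origin ../origin.git   # keep a handle on the true origin
echo "Readme" > Readme
git add Readme
git commit -q -m "initial commit"
git push -q origin HEAD                       # work lands in the team fork only
```

After this, origin.git still has no branches at all; only a deliberate merge from team-fork to origin would publish the batched work.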
- Branching in Forks
Theory: In general it is good to create a new branch for all isolated work, i.e. each ticket you work on should be in its own branch. This allows a developer to start and stop work, or to wait while work is reviewed or QA'ed, while still moving forward with other work.
Ticket branches, which I like to name ticket_X where X is the ticket number, are disposable branches. Assembla provides a convenient way of maintaining disposable branches: in your Git Tool Settings subtab, you will see a section that lets you set the naming pattern for disposable branches.
Then, when you Merge or Ignore a Merge Request based on such a branch, the branch is automatically deleted from the repository – this truly helps keep a clean Development Fork. It is important to understand that even when a branch is deleted, no code is lost; that is the beauty of git – all commits are still present, and the branch can be recovered if necessary. Typically, the developer will also still have these branches locally and can push them back up.
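The whole lifecycle of a disposable ticket branch can be sketched locally (the ticket number and file are made up for illustration); note that deleting the branch after the merge loses nothing:

```shell
# Sketch: one branch per ticket; the branch is disposable once merged.
mkdir -p /tmp/branch-demo && cd /tmp/branch-demo
git init -q .
git config user.email dev@example.com && git config user.name Dev
echo base > app.txt && git add app.txt && git commit -q -m "base"
git branch -m master                  # normalize the branch name
git checkout -q -b ticket_42          # isolated work for (hypothetical) ticket #42
echo fix >> app.txt
git commit -q -m "re #42 - apply the fix" app.txt
git checkout -q master
git merge -q --no-ff -m "merge ticket_42" ticket_42
git branch -d ticket_42               # disposable: safe to delete once merged
git log --oneline --grep="#42"        # the commit survives on master
```

On Assembla the deletion happens for you when the Merge Request is Merged or Ignored; this sketch just shows why that is safe.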
Theory: The general rule of merging is that you only want to merge from where you branched or to where you branched. If you jump across branches, the merge will work, but it might have seemingly strange results or may affect files that you did not mean to touch. So merge from your fork/branch to your fork/master, and from your fork/master to your origin/master, but not from your fork/branch to origin/master. That is a simplistic view – I am assuming that fork/branch always comes from fork/master – but this does not have to be true, and in fact it will not be true when hotfixes are done.
When merging from a fork upstream to your origin or from a 2nd level fork upstream to the 1st level fork (fork per team), you can create a Merge Request from the Submit Code tab, choosing your fork/branch as the Source (“From”) and the fork/master as your Target (“To”).
- Code Review
Theory: All code going into your fork/master should be Production-ready. To ensure this, most teams need a code review process. Code review in Assembla can happen in the Changeset view, reachable from the Commits subtab, where a reviewer can add comments directly in-line with the code; but it is more appropriately done from the Merge Request subtab, where you submit a Merge Request from your branch to your master. Choose your fork/ticket_X branch as the source and fork/master as your target, then set a title and description; you will have the opportunity to review the new code that is about to be merged to your fork/master. This is a good time to verify that the code is as you expect – it will prevent gotchas later.
Hint: Add ticket status updates in your description with the same format that you can use when committing to your repo, i.e. fixed #123 or fixed #tickets to affect all associated tickets, to apply this status update once your Merge Request is Merged.
Once the Merge Request is created from your fork/branch to your fork/master, you will have an interface where the team can hold a discussion (including @mentions to pull others into the code review), a list of the changesets that will be merged, diffs of the files that changed, a list of tickets (based on commit messages that include #ticket_num) affected by this Merge Request, in-line code commenting, and of course a place to vote on the Merge Request. If you give it a -1 vote, the system requires you to submit a comment, because it is typically not helpful to leave a -1 without any explanation – whereas a +1 speaks for itself (though I highly suggest leaving comments on what is done well in the code along with a +1).
Hint: Setup your CI server to automatically submit votes to your merge requests utilizing the API and/or JAMRb to run Jenkins builds off of your Merge Request. Authenticate with the Assembla Jenkins Auth Plugin.
If the Merge Request is merged via the “Merge and Close” button, the system will automatically merge to the target repository and the disposable branch will be deleted (if applicable). If Ignored, the disposable branch will be deleted and the Merge Request will be archived.
Deploys should always run from origin/master unless you are doing some creative deploying where you want to test out branches in Production before merging them into origin/master (this is a highly advanced workflow and typically needs a level of coordination and architecture that most projects do not have nor can sustain as they grow). This means that origin/master should always be deployable and developers can trust that if they fork/branch or merge from origin/master it has good, Production-ready code.
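A minimal deploy step, then, is just syncing the server's checkout to origin/master. A sketch of that idea, with local bare repos standing in for the hosted origin (all paths are placeholders, and this is not a full deploy script):

```shell
# Sketch: a server checkout that only ever tracks origin/master.
mkdir -p /tmp/deploy-demo && cd /tmp/deploy-demo
git init -q --bare origin.git                 # stand-in for the hosted origin
git clone -q origin.git dev && cd dev         # a developer publishes a release
git config user.email dev@example.com && git config user.name Dev
echo "release 1" > app.txt && git add app.txt && git commit -q -m "release 1"
git branch -m master
git push -q origin master
cd .. && git clone -q --branch master origin.git app   # the server's checkout
cd app
git fetch -q origin                            # the deploy itself:
git reset --hard -q origin/master              # make the tree match origin/master exactly
cat app.txt                                    # prints: release 1
```

The fetch-plus-reset pair discards any local drift on the server, which is exactly the guarantee you want when origin/master is the single source of deployable truth.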
I mentioned that hotfixes differ a bit from the regular merging workflow; this is true when you need to make a critical patch immediately. In that case it's best to work from a branch of origin/master – possible even if you only have read access to origin/master – since origin/master is guaranteed to be in sync with your current Production if you always deploy from it. Once you have your fix in your local branch, you can push it up to your fork as a branch and create a Merge Request from fork/hotfix_X to origin/master; since you branched from origin/master, you should merge directly back to it. Here is how this works in a developer fork with a remote for origin called project_origin and a local branch orig_master tracking origin/master:
$ git checkout orig_master # the local branch tracking origin/master on the project_origin remote (not the origin of the local repository)
$ git pull # make sure it is up to date
$ git checkout -b hotfix_ticketNum # create a new branch locally and switch to it
# ... make and commit your fix ...
$ git push origin hotfix_ticketNum # push hotfix_ticketNum branch up to the fork remote (called origin locally)
Now you can create a Merge Request from fork/hotfix_ticketNum to origin/master and have Code Review before merging and then deploy to Production.
So that is basically how you set up a fork network in git so developers can isolate their work, merge back to origin/master, and pull from origin/master into their development forks.
To Learn more about other workflows, click here.