Continuous Delivery for the Enterprise Accelerator
DevOps in Sitecore is a bit tricky to tackle. In traditional web applications, we normally just have to worry about the application code and any database updates that need to be deployed. With Sitecore, we have an additional wrinkle in the form of Content Items.
Separating Builds from Releases
Continuous Integration, Continuous Delivery, DevOps, Builds, Releases, Octopus, Versioning … all the technical mumbo-jumbo the developers talk about, right? It all starts with Builds and Releases.
A Build, simply put, is a sanity check of the code base. It should run automatically when developers make changes (or on a schedule). It shouldn’t install the code anywhere. It should compile the code. It should run the unit tests. It should perform any code analysis. And upon success, it should store the resulting artifacts. This is continuous integration: maintain a code repository, automate your build, and make your build self-testing.
A Release is the mechanism by which we take build artifacts and install them onto a server. Releases should not attempt to compile the solution, they should not watch repositories for changes, and they should not attempt to run unit tests. A release can only execute upon a successful build, and its job is to take the resulting artifacts of that build and distribute them accordingly. When embraced properly, releases allow you to achieve Continuous Delivery.
Choosing your Toolset
To set up automation effectively, there are two things you need to understand before we even talk about tools:
- What your application looks like from an architectural perspective. You can’t automate what you don’t understand.
- What your target environment looks like. How many servers are there? Load balancers? Firewalls? You can’t automate a moving target.
In my opinion, if you want to set up DevOps automation with Sitecore, you really need to understand how to work with Sitecore effectively. You also need to seriously think about how to best set up your solution for automation. If you’ve been following along with the series so far, then you’ve seen lots of configuration around our solutions and how we’re organizing the code. Most of this setup is what enables our DevOps strategies.
With that being said, here are the tools that I use:
- Team Development for Sitecore
- Sitecore Ship
- … and that’s it!
Team Development for Sitecore
Team Development for Sitecore has this nifty feature: Update Package Generation!
With this option selected, the TDS project will output a .update file as part of its compilation artifacts. These .update files are Update Packages for Sitecore that can be installed via the Update Installation Wizard [or via Sitecore Ship!]. Any Sitecore Items found within the package will be packaged up for installation into the target database (TDS projects can target different Sitecore databases). One important note: the screenshot above is for an environment that we call ‘PROD-CD’. It represents a content delivery server within our production infrastructure. For Content Delivery servers, we want to generate separate code and item packages, because we only want to push code (see the Package Generation Options in the screenshot above). For CM or Single environments, we can generate a single update package that has both Sitecore Items and application code.
To instruct TDS on which code to package, you select a ‘Source Web Project’. TDS uses this setting to also include the output of that corresponding C# project.
The final benefit of the TDS strategy revolves around environmental configuration. If you recall from Brian’s solution setup series, we use a dedicated class library to house environment specific configuration for Sitecore. We also never touch a native Sitecore configuration file. We always patch in our own configuration alphabetically last. You can see this in action via the File Replacement tab:
Bottom line is: per build profile, we can include configuration files from the different directories under our Environments project. I don’t know about you, but to me, this is superior to configuration transforms on top of Sitecore patch files.
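To make the "patch alphabetically last" idea concrete, here is a minimal sketch of what one of those environment-specific patch files might look like. The file name, the zzz folder, and the MailServer setting are all hypothetical; the point is simply that Sitecore applies include patches in alphabetical order, so a last-sorting name wins over everything else:

```xml
<!-- Website/App_Config/Include/zzz/zzz.Enterprise.QA.config (hypothetical
     name; the zzz folder keeps the patch alphabetically last so it applies
     after all other includes) -->
<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
  <sitecore>
    <settings>
      <setting name="MailServer">
        <patch:attribute name="value">smtp.qa.internal</patch:attribute>
      </setting>
    </settings>
  </sitecore>
</configuration>
```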
Now, I know what you’re thinking… this is a lot to set up, especially for a Helix solution. I would advise you to not go down the path of each TDS project generating an update package. Instead, there’s another feature of TDS that you can use: Package Bundling. This is available from the Multi-project Properties tab. For our purposes, the idea is simple – take the output of each bundled project and include it as the output of the current project. This means that for your Helix-based Enterprise Layer, you can have a single .update package generated containing all modules.
After all of this setup, a compilation should now result in a .update file being generated as part of the output of your TDS project. This .update file is a critical key to continuous delivery.
Sitecore Ship
Sitecore Ship is an open source module for Sitecore that allows for programmatic installation of .update packages through a Web API. This means that we can take the .update packages generated from TDS and install them into Sitecore via a CURL command.
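As a sketch of what that looks like, the following wraps the call in a small function. The hostname and package name are placeholders, and the route shown is Sitecore.Ship's file-upload install endpoint; double-check the routes your version of the module exposes before relying on this:

```shell
#!/bin/sh
# Sketch: install a TDS-generated .update package through Sitecore Ship.
# HOST and PACKAGE are placeholders for your environment.
HOST="https://mysite.com"
PACKAGE="Enterprise.Master.update"

ship_install() {
  # $1 = host, $2 = path to the .update file to upload and install
  curl -f -sS -F "path=@$2" "$1/services/package/install/fileupload"
}

# ship_install "$HOST" "$PACKAGE"   # uncomment on a real build/release agent
echo "would install $PACKAGE to $HOST"
```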
I don’t want to go too deeply into the weeds on this module, as there are already a ton of great resources on the web:
- Sitecore.Ship Repository
- Ruling the continuous integration seas with Sitecore.Ship – Part 1 and Part 2
- Continuous Delivery with Sitecore, TDS, Git, TeamCity, Octopus and Sitecore Ship – Part 1 and Part 2
I recommend you familiarize yourself with the documentation and installation steps. It’s quite simple to use, and very powerful when abused.
The Actual Build
Depending upon your build service of choice, these steps may change a bit. I’ve been using Visual Studio Team Services with one of our clients for the last year or so, and it’s a very beautiful and intuitive system. Brainjocks uses TeamCity for all in-house operations, and that also has the power to get the job done. Other clients are on Jenkins and have been able to successfully set up these practices with that system. Regardless of what you choose, however, the steps are basically the same:
- Check out the Code
- Restore Nuget packages
- Update Assembly Version Info (remember in Part 4 when we talked about AssemblyInfo.cs? Have your build system update that file programmatically to establish the build number being executed.)
- Compile against your target build profile
- Run your unit tests (if you’re using FakeDB, you’ll need a license.xml either checked into source control, or your build server can have a local copy that it sticks into place)
- Store the resulting .update files as artifacts of the build.
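The steps above can be sketched as a single script. This is illustrative only: the solution name, paths, and the "QA" profile are assumptions, and on a real Windows agent you would invoke MSBuild and vstest by their full paths. The version-stamping step is the one piece shown in full:

```shell
#!/bin/sh
# A minimal sketch of the build steps as one script.
set -e

# Step 3: stamp the build number into AssemblyInfo.cs
stamp_version() {
  # $1 = path to AssemblyInfo.cs, $2 = version to stamp (e.g. 2.1.0.347)
  sed -i "s/AssemblyVersion(\"[^\"]*\")/AssemblyVersion(\"$2\")/" "$1"
}

run_build() {
  nuget restore MySolution.sln                              # Step 2
  stamp_version Properties/AssemblyInfo.cs "$BUILD_NUMBER"  # Step 3
  msbuild MySolution.sln /p:Configuration=QA                # Step 4
  vstest.console.exe Tests/bin/QA/Tests.dll                 # Step 5
  # Step 6: collect the TDS .update output as build artifacts
  find . -path "*/Package_QA/*.update" -exec cp {} "$ARTIFACT_DIR" \;
}
```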
You can see what this may look like in VSTS from this screenshot:
As you can see, a build is quite simple. Visual Studio Team Services also has a marketplace of plugins that can be used. For instance, the Update Assembly Info step above is not a native VSTS command. Instead, we downloaded this module from the marketplace and simply pointed it to our AssemblyInfo accordingly. In the VSTS example, we're also storing the .update files from the resulting compilation as an artifact of the build. That's what the Copy and Publish steps at the end are achieving.
Here’s a similar build from the perspective of Team City:
Regardless of the technology you select, there are a few critical concepts that you’ll want to take from project to project:
- Each tenant should have its own build for each environment, and they should all follow these same steps.
- I personally set up GitFlow for my repository, and I'll use something like a branch-to-environment mapping across my builds.
- Each branch, when committed to, initiates a different build accordingly.
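One way to express that branch-per-build idea is a small mapping function. The branch names and environment targets below are assumptions based on a typical GitFlow layout; adjust them to your own repository:

```shell
#!/bin/sh
# Map a Git branch to the environment its triggered build deploys to.
# The mapping is an assumption; change it to match your GitFlow setup.
map_branch() {
  case "$1" in
    develop)   echo "INT" ;;   # every commit deploys continuously
    release/*) echo "QA" ;;    # release candidates for QA/content teams
    master)    echo "PROD" ;;  # production releases
    *)         echo "none" ;;  # feature branches build but do not deploy
  esac
}

map_branch develop      # INT
map_branch release/2.1  # QA
```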
Build Agent Considerations
The Build Agent is the actual computer which performs the compilation of your application. When using Visual Studio Team Services, you have the option to utilize cloud-hosted agents (e.g., ‘out of the box’ agents from Microsoft) or custom agents from your Azure resource pool. With TeamCity, Brainjocks uses Google Cloud servers to perform our compilations. Either way, you need to ensure that your build agent has the software required to perform a compilation. Because we’re using TDS projects, it means that we also need to have Team Development for Sitecore installed and configured on our build agent machines as well. In general, we install the following software on our build agents:
- Visual Studio 2017 (not necessary, but it installs most dependencies that you’ll need, so no real harm).
- Team Development for Sitecore
- Node.js’s Package Manager: NPM
- SASS compiler, either via Ruby Gems or through NPM. e.g.,
npm install -g sass
- LESS compiler, from NPM. e.g.,
npm install -g less
- CURL, a command line URL utility.
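A quick way to sanity-check an agent against that list is to probe the PATH. The tool names below are the common CLI entry points and may differ on your agents (MSBuild and TDS, for instance, usually need their full Windows paths):

```shell
#!/bin/sh
# Sanity-check that a build agent has the required tooling on its PATH.
have() { command -v "$1" >/dev/null 2>&1; }

for tool in npm sass lessc curl; do
  if have "$tool"; then echo "found:   $tool"; else echo "missing: $tool"; fi
done
```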
You may also choose to put RAZL on your build server if you wish to have jobs whose purpose is to synchronize content from one environment down to another. While this type of job does work, I find it incredibly slow, so I typically skip it.
Once our build completes, we can have a release (or a subsequent build, depending on your system) that handles the actual movement of artifacts. This is where things differ for most organizations. There are a number of off-the-shelf tools such as Octopus Deploy that your organization may use, but for the purposes of this series, I'm going to assume you don't have access to these tools.
From a VSTS perspective, this is quite simple:
- Send a notification that a release is occurring (optional)
- Ship the Core update package
- Ship the Master update package
- Clean up configuration
- Publish the instance
- Verify the site is awake
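The sequence above can be sketched as one script. The hostname and package names are placeholders, and DRY_RUN (on by default here) prints each command instead of executing it, so you can inspect the sequence before wiring it into a real release:

```shell
#!/bin/sh
# Sketch of the release sequence. HOST and package names are placeholders.
HOST="https://mysite.com"
DRY_RUN="${DRY_RUN:-1}"

run() {
  # In dry-run mode, print the command; otherwise execute it.
  if [ "$DRY_RUN" = "1" ]; then echo "would run: $*"; else "$@"; fi
}

run curl -f -sS -F "path=@Enterprise.Core.update"   "$HOST/services/package/install/fileupload"  # ship Core
run curl -f -sS -F "path=@Enterprise.Master.update" "$HOST/services/package/install/fileupload"  # ship Master
run curl -X POST "$HOST/score/configuration/cleanup"                                             # clean up configs
run curl -f -sS -F source=master -F targets=web -F languages=en "$HOST/services/publish/smart"   # publish
run curl -f -sS -o /dev/null "$HOST/"                                                            # verify awake
```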
Now, remember how I mentioned that VSTS has a marketplace of modules? Sitecore Ship is one of those modules!
You can see that there are a few other modules that make working with Sitecore easy. For our purposes though, we’re interested in the Sitecore Ship module. Because this release is tied to the build we discussed earlier, it has access to the build’s artifacts. We stored the .update files as part of the artifacts, so now the Sitecore Ship commands can use those update files for installation.
One of the major pain points with Sitecore Ship and update packages in general is the inability to overwrite existing configuration files. When you install an update package that has a configuration file that conflicts with a file on the file system, the configuration file will get written to the file system with an additional GUID appended as an extension. This is where the Cleanup configuration files command comes in handy. Cleanup configuration files is simply a CURL operation:
curl -X POST https://mysite.com/score/configuration/cleanup
SCORE comes with an endpoint that helps us solve this. It’s located under /score/configuration/cleanup. If you’re not using SCORE, you’ll have to roll your own version of this.
For publishing changes automatically, Sitecore Ship provides an endpoint which can be used:
I typically set it up with a CURL command, like so:
curl -f -sS -F source=master -F targets=web -F languages=en https://www.mysite.com/services/publish/smart
Notice that there is a source, target, and language switch. You may need to adjust these depending upon how you’ve architected your CMS.
Finally, the verification is also a simple CURL command. Simply request the homepage and ensure that a 200 status code is returned accordingly.
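A hedged sketch of that verification, split into two small helpers (the URL is a placeholder for your own site):

```shell
#!/bin/sh
# Verify-the-site-is-awake: fetch the homepage's HTTP status code and
# fail unless it is 200.
fetch_status() { curl -s -o /dev/null -w '%{http_code}' "$1"; }
is_awake()     { [ "$1" = "200" ]; }

# status=$(fetch_status "https://www.mysite.com/")
# is_awake "$status" || { echo "site check failed: HTTP $status"; exit 1; }
```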
But what about the Enterprise Layer?
The Enterprise Layer should follow a very similar pattern. Its builds and releases look exactly like a tenant, you just utilize different TDS projects. There are, however, a few considerations you should make before you go too far down this path:
Separate Enterprise Layer Development from Tenant Development
If I’m working on a tenant brand site, I expect the version of the enterprise layer that I’m using to be production ready, regardless of environment. Let’s assume we have three environments: INT, QA, and Production. In our paradigm, INT is used for continuous deployment to ensure that no build is breaking the server. Once we’re confident on our stability, we release our tenant site code to QA. From QA, content analysts can stage out pages and our QA team can test the content assembly process. For any new section of our website, we’ll build that in QA and then package it for installation in production. In production, we allow for our marketing team to manage the website.
Just because I’m working in QA on the tenant assembly side does not mean that I’m wanting to use a QA revision of the enterprise accelerator. The Enterprise Accelerator should be developed in its own lane with its own INT/QA servers following its own release cycles. Once the Enterprise Accelerator team is ready, they should then release a new production version of their enterprise layer. As a tenant website developer, I want that production version installed on all environments that I’m working with (from my local environment up through production).
This may feel weird at first, but think about it. Do you want an untested DEV version of SCORE installed on your local sandbox just because it's the same 'environment'? No. You don't. You want that production-ready version. Follow the same ideology here.
Distribute the Enterprise Layer via Nuget
The Enterprise Accelerator builds and releases should follow a similar pattern to the tenant builds and releases. But if I’m working locally on a tenant, how do I get the enterprise layer to install on my local machine? If you recall from the SCORE scaffolding video, SCORE installs itself into your local environment via Nuget. We can set up the same mechanism for the Enterprise Accelerator.
What you want to do is set up two additional steps on your Enterprise Production build: Nuget Pack, and Nuget Push. In VSTS this is super trivial as there are already steps available for you out of the box, and VSTS has the capability to host its own Nuget feeds. If you’re using something like TeamCity, it’s a bit more involved but can be done with the likes of tools such as MyGet.
To get SCORE to install from the Nuget command line, we use what's called an init.ps1 script. If you have a Powershell script located within your Nuget package at /tools/init.ps1, it will get executed by Nuget upon successful installation of that Nuget package into a solution. SCORE also distributes an UpdatePackageInstaller.asmx file and a corresponding Score.Automation.WebServices.dll file that the init.ps1 places into your sandbox directory. The UpdatePackageInstaller.asmx works in a similar fashion to Sitecore Ship: it accepts a .update file being posted to it, and will subsequently install that .update package into the running instance of Sitecore.
So, let’s recap how the SCORE Nuget package works:
- Nuget package is installed into various projects within the solution.
- Since the package is being installed into a new solution, /tools/init.ps1 kicks in and executes.
- init.ps1 finds all TDS projects in your application and inspects their settings.
- From those settings, it determines where your installation of Sitecore is located on the file system and the Sitecore hostname that TDS communicates with.
- init.ps1 copies an UpdatePackageInstaller.asmx into your local sandbox.
- init.ps1 then installs SCORE .update packages into your local sandbox by posting them to the new UpdatePackageInstaller.asmx.
Again, this is one of those situations where if you pull down the SCORE Nuget feed, you can see these scripts. I highly recommend you copy and adjust them to your liking. From there, it’s very straightforward to get your Enterprise Accelerator to install itself similarly to SCORE.
First, I create a Package project and clone the init.ps1 and Package-Installer scripts into it.
If you look at the init.ps1 from SCORE, it has some assumptions on the names of the packages being installed. Just change and adjust it as needed, it’s very straightforward if you know Powershell. You’ll also notice that I copied the SCORE WebService dll and Update Package Installer endpoint into the project directly.
From here, all you need to do is include a .nuspec file (I like to name it after the project, makes the Nuget Pack step simpler in the build service):
<?xml version="1.0"?>
<package>
  <metadata>
    <id>Enterprise</id>
    <version>$version$</version>
    <dependencies>
      <dependency id="Score.9.0.171219" version="9.0.171219" />
      <dependency id="Score.UI.9.0.171219" version="9.0.171219" />
      <dependency id="Score.BootstrapUI.9.0.171219" version="9.0.171219" />
    </dependencies>
  </metadata>
  <files>
    <file src="init.ps1" target="tools/init.ps1" />
    <file src="Package-Installer/UpdatePackageInstaller.asmx" target="tools/installer" />
    <file src="Package-Installer/Score.Automation.WebServices.dll" target="tools/installer" />
    <!-- you may need a reference to all other projects to successfully copy .dll's from the entire Enterprise layer, or adjust the src pattern as necessary -->
    <file src="bin/Enterprise.*.dll" target="lib/net461" />
    <file src="../path/to/Enterprise.Configuration.Core/Package_$configuration$/*.update" target="tools/packages" />
    <file src="../path/to/Enterprise.Configuration.Master/Package_$configuration$/*.update" target="tools/packages" />
  </files>
</package>
You can see that the nuspec is quite simple. Include your init.ps1, your package installer service, all DLLs from your Enterprise layer, and the .update packages that result from a compilation. Don’t forget to set your SCORE dependency requirements (or SXA) as needed!
Also- remember how we set up versioning in the last post? init.ps1 assumes you’re following a similar pattern 🙂