To support continuous delivery, no human should have direct push permissions on your master
branch. If you develop on GitHub, the latest tag of this branch gets deployed when you create a release – which is hopefully very often, and very automated.
You’re already doing a great job of tracking future features and current bugs as issues (right?). As a quick aside: an issue should be a well-defined piece of work that can be merged to the main branch and deployed without breaking anything. It could be a new piece of functionality, a button component update, or a bug fix.
A short-lived branch-per-issue helps ensure that its resulting pull request doesn’t get too large, making it unwieldy and hard to review carefully. The definition of “short” varies depending on the team or project’s development velocity: for a small team producing a commercial app (like a startup), the time from issue branch creation to PR probably won’t exceed a week. For open source projects like the OWASP WSTG that depend on volunteers working around busy schedules, branches may live for a few weeks to a few months, depending on the contributor. Generally, strive to iterate in as little time as possible.
Here’s what this looks like practically. For an issue named (#28) Add user settings page, check out a new branch from master:
# Get all the latest work locally
git checkout master
git pull
# Start your new branch from master
git checkout -b 28/add-settings-page
Work on the issue, and periodically merge in master to resolve conflicts early and avoid bigger ones later:
# Commit to your issue branch
git commit ...
# Get the latest work on master
git checkout master
git pull
# Return to your issue branch and merge in master
git checkout 28/add-settings-page
git merge master
You may prefer to use rebasing instead of merging in master. This happens to be my personal preference as well; however, I’ve found that people generally have a harder time wrapping their heads around how rebasing works than they do with merging. Interactive rebasing can easily introduce errors, and rewriting history is confusing to begin with. Since I’m all about reducing cognitive load in developers’ processes, I recommend using a merge strategy.
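For completeness, here’s roughly what the rebase route looks like with the example branch above. Note that rebasing rewrites the branch’s history, so a branch you’ve already pushed will need a force push (ideally with --force-with-lease):
# Replay the issue branch's commits on top of the latest master
git checkout master
git pull
git checkout 28/add-settings-page
git rebase master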
When the issue work is ready to PR, open the request against master. Automated tests run. Teammates review the work (using inline comments and suggestions if you’re on GitHub). Depending on the project, you may deploy a preview version as well.
Once everything checks out, the PR is merged, the issue is closed, and the branch is deleted.
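If that last bit happens at the command line rather than through the web UI, cleanup for our example branch looks like this:
# Tidy up once the PR is merged
git checkout master
git pull
git branch -d 28/add-settings-page
git push origin --delete 28/add-settings-page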
Some common pitfalls I’ve seen that can undermine this flow are letting issue branches drift too far from master, and leaving old branches lying around. Not removing branches that are stale or have already been merged can cause confusion and make it more difficult than necessary to differentiate new ones.
If this sounds like a process you’d use, or if you have anything to add, let me know via Webmention!
With the general availability of GitHub Actions, we have a chance to programmatically access and preserve GitHub event data in our repository. Making the data part of the repository itself is a way of preserving it outside of GitHub, and also gives us the ability to feature the data on a front-facing website, such as with GitHub Pages, through an automated process that’s part of our CI/CD pipeline.
And, if you’re like me, you can turn GitHub issue comments into an awesome 90s guestbook page.
No matter the usage, the principal concepts are the same. We can use Actions to access, preserve, and display GitHub event data - with just one workflow file. To illustrate the process, I’ll take you through the workflow code that makes my guestbook shine on.
For an introductory look at GitHub Actions including how workflows are triggered, see A lightweight, tool-agnostic CI/CD flow with GitHub Actions.
An Actions workflow runs in an environment with some default environment variables. A lot of convenient information is available here, including event data. The most complete way to access the event data is via the $GITHUB_EVENT_PATH variable, the path of the file containing the complete JSON event payload.
The expanded path looks like /home/runner/work/_temp/_github_workflow/event.json and its data corresponds to the triggering webhook event. You can find the documentation for webhook event data in GitHub REST API Event Types and Payloads. To make the JSON data available in the workflow environment, you can use a tool like jq to parse the event data and put it in an environment variable.
Below, I grab the comment ID from an issue comment event:
ID="$(jq '.comment.id' $GITHUB_EVENT_PATH)"
Most event data is also available via the github.event context variable without needing to parse JSON. The fields are accessed using dot notation, as in the example below where I grab the same comment ID:
ID=${{ github.event.comment.id }}
For my guestbook, I want to display entries with the user’s handle, and the date and time. I can capture this event data like so:
AUTHOR=${{ github.event.comment.user.login }}
DATE=${{ github.event.comment.created_at }}
Shell variables are handy for accessing data, however, they’re ephemeral. The workflow environment is created anew each run, and even shell variables set in one step do not persist to other steps. To persist the captured data, you have two options: use artifacts, or commit it to the repository.
Using artifacts, you can persist data between workflow jobs without committing it to your repository. This is handy when, for example, you wish to transform or incorporate the data before putting it somewhere more permanent.
Two actions assist with using artifacts: upload-artifact and download-artifact. You can use these actions to make files available to other jobs in the same workflow. For a full example, see passing data between jobs in a workflow.
The upload-artifact action’s action.yml contains an explanation of the keywords. The uploaded files are saved in .zip format. Another job in the same workflow run can use the download-artifact action to utilize the data in another step.
You can also manually download the archive on the workflow run page, under the repository’s Actions tab.
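To give a rough idea of the shape of it, here’s a minimal sketch of one job handing a file to another; the job names, artifact name, and file path are made up for this example:
jobs:
  capture:
    runs-on: ubuntu-latest
    steps:
      - name: Save the comment payload to a file
        run: jq '.comment' $GITHUB_EVENT_PATH > comment.json
      - uses: actions/upload-artifact@v1
        with:
          name: comment-data
          path: comment.json
  display:
    needs: capture
    runs-on: ubuntu-latest
    steps:
      # download-artifact@v1 unpacks the files into a folder named after the artifact
      - uses: actions/download-artifact@v1
        with:
          name: comment-data
      - run: cat comment-data/comment.json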
Persisting workflow data between jobs does not make any changes to the repository files, as the artifacts generated live only in the workflow environment. Personally, being comfortable working in a shell environment, I see a narrow use case for artifacts, though I’d have been remiss not to mention them. Besides passing data between jobs, they could be useful for creating .zip format archives of, say, test output data. In the case of my guestbook example, I simply ran all the necessary steps in one job, negating any need for passing data between jobs.
To preserve data captured in the workflow in the repository itself, it is necessary to add and push this data to the Git repository. You can do this in the workflow by creating new files with the data, or by appending data to existing files, using shell commands.
To work with the repository files in the workflow, use the checkout action to first get a copy to work with:
- uses: actions/checkout@master
with:
fetch-depth: 1
To add comments to my guestbook, I turn the event data captured in shell variables into proper files, using substitutions in shell parameter expansion to sanitize user input and translate newlines to paragraphs. I wrote previously about why user input should be treated carefully.
- name: Turn comment into file
run: |
ID=${{ github.event.comment.id }}
AUTHOR=${{ github.event.comment.user.login }}
DATE=${{ github.event.comment.created_at }}
COMMENT=$(echo "${{ github.event.comment.body }}")
NO_TAGS=${COMMENT//[<>]/\`}
FOLDER=comments
printf '%b\n' "<div class=\"comment\"><p>${AUTHOR} says:</p><p>${NO_TAGS//$'\n'/\<\/p\>\<p\>}</p><p>${DATE}</p></div>\r\n" > ${FOLDER}/${ID}.html
By using printf and directing its output with > to a new file, the event data is transformed into an HTML file, named with the comment ID number, that contains the captured event data. Formatted, it looks like:
<div class="comment">
<p>victoriadrake says:</p>
<p>This is a comment!</p>
<p>2019-11-04T00:28:36Z</p>
</div>
When working with comments, one effect of naming files using the comment ID is that a new file with the same ID will overwrite the previous. This is handy for a guestbook, as it allows any edits to a comment to replace the original comment file.
If you’re using a static site generator like Hugo, you could build a Markdown format file, stick it in your content/ folder, and the regular site build will take care of the rest. In the case of my simplistic guestbook, I have an extra step to consolidate the individual comment files into a page. Each time it runs, it overwrites the existing index.html with the header.html portion (>), then finds and appends (>>) all the comment files’ contents in descending order, and lastly appends the footer.html portion to end the page.
- name: Assemble page
run: |
cat header.html > index.html
find comments/ -name "*.html" | sort -r | xargs -I % cat % >> index.html
cat footer.html >> index.html
Since the checkout action is not quite the same as cloning the repository, at the time of writing there are some issues still to work around. A couple of extra steps are necessary to pull, checkout, and successfully push changes back to the master branch, but this is pretty trivially done in the shell.
Below is the step that adds, commits, and pushes changes made by the workflow back to the repository’s master branch.
- name: Push changes to repo
run: |
REMOTE=https://${{ secrets.GITHUB_TOKEN }}@github.com/${{ github.repository }}
git config user.email "${{ github.actor }}@users.noreply.github.com"
git config user.name "${{ github.actor }}"
git pull ${REMOTE}
git checkout master
git add .
git status
git commit -am "Add new comment"
git push ${REMOTE} master
The remote (which is, in fact, our own repository) is specified using the github.repository context variable. For our workflow to be allowed to push to master, we give the remote URL using the default secrets.GITHUB_TOKEN variable.
Since the workflow environment is shiny and newborn, we need to configure Git. In the above example, I’ve used the github.actor context variable to input the username of the account initiating the workflow. The email is similarly configured using the default noreply GitHub email address.
If you’re using GitHub Pages with the default secrets.GITHUB_TOKEN variable and without a site generator, pushing changes to the repository in the workflow will only update the repository files. The GitHub Pages build will fail with an error, “Your site is having problems building: Page build failed.”
To enable Actions to trigger a Pages site build, you’ll need to create a Personal Access Token. This token can be stored as a secret in the repository settings and passed into the workflow in place of the default secrets.GITHUB_TOKEN variable. I wrote more about Actions environment and variables in this post.
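Assuming you’ve stored the token as a repository secret (the name GH_PAGES_TOKEN below is just a placeholder for whatever you called yours), the only change to the push step is the remote URL:
REMOTE=https://${{ secrets.GH_PAGES_TOKEN }}@github.com/${{ github.repository }}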
With the use of a Personal Access Token, a push initiated by the Actions workflow will also update the Pages site. You can see it for yourself by leaving a comment in my guestbook! The comment creation event triggers the workflow, which then takes around 30 seconds to run and update the guestbook page.
Where a site build is necessary for changes to be published, such as when using Hugo, an Action can do this too. However, in order to avoid creating unintended loops, one Action workflow will not trigger another (see what will). Instead, it’s extremely convenient to handle the process of building the site with a Makefile, which any workflow can then run. Simply add running the Makefile as the final step in your workflow job, with the repository token where necessary:
- name: Run Makefile
env:
TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: make all
This ensures that the final step of your workflow builds and deploys the updated site.
GitHub Actions provides a neat way to capture and utilize event data so that it’s not only available within GitHub. The possibilities are only as limited as your imagination!
Did I mention I made a 90s guestbook page? My inner-Geocities-nerd is a little excited.
My earlier, brief mention of .bashrc didn’t really do it justice, so here’s a quick post that offers a bit more detail about what the Bash configuration file can do.
My current configuration hugely improves my workflow, and saves me well over 50% of the keystrokes I would have to employ without it! Let’s look at some examples of aliases, functions, and prompt configurations that can improve our workflow by helping us be more efficient with fewer key presses.
A smartly written .bashrc can save a whole lot of keystrokes. You can take advantage of this in the literal sense by using bash aliases, or strings that expand to larger commands. For an indicative example, here is a Bash alias for copying files in the terminal:
# Always copy contents of directories (r)ecursively and explain (v) what was done
alias cp='cp -rv'
The alias command defines the string you’ll type, followed by what that string will expand to. You can override existing commands, like cp above. On its own, the cp command will only copy files, not directories, and it succeeds silently. With this alias, you need not remember to pass those two flags, nor cd or ls to the location of the copied file to confirm that it’s there! Now, just typing cp does all of that for us.
Here are a few more .bashrc aliases for passing flags with common functions.
# List contents with colors for file types, (A)lmost all hidden files (without . and ..), in (C)olumns, with class indicators (F)
alias ls='ls --color=auto -ACF'
# List contents with colors for file types, (a)ll hidden entries (including . and ..), use (l)ong listing format, with class indicators (F)
alias ll='ls --color=auto -alF'
# Explain (v) what was done when moving a file
alias mv='mv -v'
# Create any non-existent (p)arent directories and explain (v) what was done
alias mkdir='mkdir -pv'
# Always try to (c)ontinue getting a partially-downloaded file
alias wget='wget -c'
Aliases come in handy when you want to avoid typing long commands, too. Here are a few I use when working with Python environments:
alias pym='python3 manage.py'
alias mkenv='python3 -m venv env'
alias startenv='source env/bin/activate && which python3'
alias stopenv='deactivate'
For further inspiration on ways Bash aliases can save time, I highly recommend the examples in this article.
One downside of the aliases above is that they’re rather static - they’ll always expand to exactly the text declared. For a Bash alias that takes arguments, you’ll need to create a function. You can do this like so:
# Show contents of the directory after changing to it
function cd () {
builtin cd "$1"
ls -ACF
}
I can’t begin to tally how many times I’ve typed cd and then ls immediately after to see the contents of the directory I’m now in. With this function set up, it all happens with just those two letters! The function takes the first argument, $1, as the location to change directory to, then prints the contents of that directory in nicely formatted columns with file type indicators. The builtin keyword is necessary so that the function calls Bash’s own cd instead of recursively calling itself.
Bash functions are very useful when it comes to downloading or upgrading software, too.
Thanks to the static site generator Hugo’s excellent ship frequency, I previously spent at least a few minutes every couple weeks downloading the new extended version. With a Bash function, I only need to pass in the version number, and the upgrade happens in a few seconds.
# Hugo install or upgrade
function gethugo () {
wget -q -P tmp/ https://github.com/gohugoio/hugo/releases/download/v"$@"/hugo_extended_"$@"_Linux-64bit.tar.gz
tar xf tmp/hugo_extended_"$@"_Linux-64bit.tar.gz -C tmp/
sudo mv -f tmp/hugo /usr/local/bin/
rm -rf tmp/
hugo version
}
The $@ notation simply takes all the arguments given, replacing its spot in the function. To run the above function and download Hugo version 0.57.2, you use the command gethugo 0.57.2.
I’ve got one for Golang, too:
function getgolang () {
sudo rm -rf /usr/local/go
wget -q -P tmp/ https://dl.google.com/go/go"$@".linux-amd64.tar.gz
sudo tar -C /usr/local -xzf tmp/go"$@".linux-amd64.tar.gz
rm -rf tmp/
go version
}
Or how about a function that adds a remote origin URL for GitLab to the current repository?
function glab () {
git remote set-url origin --add git@gitlab.com:"$@"/"${PWD##*/}".git
git remote -v
}
With glab username, you can create a new origin URL for the current Git repository with your username on GitLab.com. Pushing to a new remote URL automatically creates a new private GitLab repository, so this is a useful shortcut for creating backups!
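For example, to back up an existing local repository (with username standing in for your own GitLab handle):
cd ~/FULL/PATH/first-repository
glab username
git push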
Bash functions are really only limited by the possibilities of scripting, of which there are, practically, few limits. If there’s anything you do on a frequent basis that requires typing a few lines into a terminal, you can probably create a Bash function for it!
Besides directory contents, it’s also useful to see the full path of the directory we’re in. The Bash prompt can show us this path, along with other useful information like our current Git branch. To make it more readable, you can define colours for each part of the prompt. Here’s how you can set up your prompt in .bashrc to accomplish this:
# Colour codes are cumbersome, so let's name them
txtcyn='\[\e[0;96m\]' # Cyan
txtpur='\[\e[0;35m\]' # Purple
txtwht='\[\e[0;37m\]' # White
txtrst='\[\e[0m\]' # Text Reset
# Which (C)olour for what part of the prompt?
pathC="${txtcyn}"
gitC="${txtpur}"
pointerC="${txtwht}"
normalC="${txtrst}"
# Get the name of our branch and put parenthesis around it
gitBranch() {
git branch 2> /dev/null | sed -e '/^[^*]/d' -e 's/* \(.*\)/(\1)/'
}
# Build the prompt
export PS1="${pathC}\w ${gitC}\$(gitBranch) ${pointerC}\$${normalC} "
Result:
~/github/myrepo (master) $
Naming the colours helps to easily identify where one colour starts and stops, and where the next one begins. The prompt that you see in your terminal is defined by the string following export PS1, with each component of the prompt set with an escape sequence. Let’s break that down:
- \w displays the current working directory,
- \$(gitBranch) calls the gitBranch function defined above, which displays the current Git branch,
- \$ will display a "$" if you are a normal user or in normal user mode, and a "#" if you are root.
The full list of Bash escape sequences can help us display many more bits of information, including even the time and date! Bash prompts are highly customizable and individual, so feel free to set it up any way you please.
Here are a few options that put information front and centre and can help us to work more efficiently.
Username and current time with seconds, in 24-hour HH:MM:SS format:
export PS1="${userC}\u ${normalC}at \t >"
user at 09:35:55 >
Full file path on a separate line, and username:
export PS1="${pathC}\w${normalC}\n\u:"
~/github/myrepo
user:
Or, a minimalist prompt:
export PS1=">"
>
We can build many practical prompts with just the basic escape sequences; once you start to integrate functions with prompts, as in the Git branch example, things can get really complicated. Whether this amount of complication is an addition or a detriment to your productivity, only you can know for sure!
Many fancy Bash prompts are possible with programs readily available with a quick search. I’ve intentionally not provided samples here because, well, if you can tend to get as excited about this stuff as I can, it might be a couple hours before you get back to what you were doing before you started reading this post, and I just can’t have that on my conscience. 🥺
We’ve hopefully struck a nice balance now between time invested and usefulness gained from our Bash configuration file! I hope you use your newly-recovered keystroke capacity for good.
Caveat: you’ll need a list of the GitHub repositories you want to clone. The good thing about that is it gives you full agency to choose just the repositories you want on your machine, instead of going in whole-hog.
You can easily clone GitHub repositories without entering your password each time by using HTTPS with your 15-minute cached credentials or, my preferred method, by connecting to GitHub with SSH. For brevity I’ll assume we’re going with the latter, and our SSH keys are set up.
Given a list of GitHub URLs in the file gh-repos.txt, like this:
git@github.com:username/first-repository.git
git@github.com:username/second-repository.git
git@github.com:username/third-repository.git
We run:
xargs -n1 git clone < gh-repos.txt
This clones all the repositories on the list into the current folder. This same one-liner works for GitLab repositories as well, if you substitute the appropriate URLs.
There are two halves to this one-liner: the input, counterintuitively on the right side, and the part that makes stuff happen, on the left. We could make the order of these parts more intuitive (maybe?) by writing the same command like this:
<gh-repos.txt xargs -n1 git clone
To run a command for each line of our input, gh-repos.txt, we use xargs -n1. The tool xargs reads items from input and executes any commands it finds (it will echo if it doesn’t find any). By default, it assumes that items are separated by spaces; newlines also work and make our list easier to read. The flag -n1 tells xargs to use 1 argument, or in our case, one line, per command. We build our command with git clone, which xargs then executes for each line. Ta-da.
GitLab, unlike GitHub, lets us do this nifty thing where we don’t have to use the website to make a new repository first. We can create a new GitLab repository from our terminal. The newly created repository defaults to being set as Private, so if we want to make it Public on GitLab, we’ll have to do that manually later.
The GitLab docs tell us to push to create a new project using git push --set-upstream
, but I don’t find this to be very convenient for using GitLab as a backup. As I work with my repositories in the future, I’d like to run one command that pushes to both GitHub and GitLab without additional effort on my part.
To make this Bash one-liner work, we’ll also need a list of repository URLs for GitLab (ones that don’t exist yet). We can easily do this by copying our GitHub repository list, opening it up with Vim, and doing a search-and-replace:
cp gh-repos.txt gl-repos.txt
vim gl-repos.txt
:%s/\<github\>/gitlab/g
:wq
This produces gl-repos.txt, which looks like:
git@gitlab.com:username/first-repository.git
git@gitlab.com:username/second-repository.git
git@gitlab.com:username/third-repository.git
We can create these repositories on GitLab, add the URLs as remotes, and push our code to the new repositories by running:
awk -F'\/|(\.git)' '{system("cd ~/FULL/PATH/" $2 " && git remote set-url origin --add " $0 " && git push")}' gl-repos.txt
Hang tight and I’ll explain it; for now, take note that ~/FULL/PATH/ should be the full path to the directory containing our GitHub repositories.
We do have to make note of a couple of assumptions: each repository lives in a local directory named after it, and the branch we’re pushing is master. The one-liner could be expanded to handle other setups, but it is the humble opinion of the author that at that point, we really ought to be writing a Bash script.
Our Bash one-liner uses each line (or URL) in the gl-repos.txt file as input. With awk, it splits off the name of the directory containing the repository on our local machine, and uses these pieces of information to build our larger command. If we were to print the output of awk, we’d see:
cd ~/FULL/PATH/first-repository && git remote set-url origin --add git@gitlab.com:username/first-repository.git && git push
cd ~/FULL/PATH/second-repository && git remote set-url origin --add git@gitlab.com:username/second-repository.git && git push
cd ~/FULL/PATH/third-repository && git remote set-url origin --add git@gitlab.com:username/third-repository.git && git push
Let’s look at how we build this command.
awk
The tool awk can split input based on field separators. The default separator is a whitespace character, but we can change this by passing the -F flag. Besides single characters, we can also use a regular expression field separator. Since our repository URLs have a set format, we can grab the repository names by asking for the substring between the slash character / and the end of the URL, .git.
One way to accomplish this is with our regex \/|(\.git):
- \/ is an escaped / character;
- | means "or", telling awk to match either expression;
- (\.git) is the capture group at the end of our URL that matches ".git", with an escaped . character. This is a bit of a cheat, as ".git" isn’t strictly splitting anything (there’s nothing on the other side) but it’s an easy way for us to take this bit off.
Once we’ve told awk where to split, we can grab the right substring with the field operator. We refer to our fields with a $ character, then by the field’s column number. In our example, we want the second field, $2. Here’s what all the substrings look like:
1: git@gitlab.com:username
2: first-repository
To use the whole string, or in our case, the whole URL, we use the field operator $0. To write the command, we just substitute the field operators for the repository name and URL. Running this with print as we’re building it can help to make sure we’ve got all the spaces right.
awk -F'\/|(\.git)' '{print "cd ~/FULL/PATH/" $2 " && git remote set-url origin --add " $0 " && git push"}' gl-repos.txt
We build our command inside the parentheses of system(). By using this as the output of awk, each command will run as soon as it is built and output. The system() function creates a child process that executes our command, then returns once the command is completed. In plain English, this lets us perform the Git commands on each repository, one-by-one, without breaking from our main process in which awk is doing things with our input file. Here’s our final command again, all put together.
awk -F'\/|(\.git)' '{system("cd ~/FULL/PATH/" $2 " && git remote set-url origin --add " $0 " && git push")}' gl-repos.txt
By adding the GitLab URLs as remotes, we’ve simplified the process of pushing to both externally hosted repositories. If we run git remote -v in one of our repository directories, we’ll see:
origin git@github.com:username/first-repository.git (fetch)
origin git@github.com:username/first-repository.git (push)
origin git@gitlab.com:username/first-repository.git (push)
Now, simply running git push without arguments will push the current branch to both remote repositories.
We should also note that git pull will generally only try to pull from the remote repository you originally cloned from (the URL marked (fetch) in our example above). Pulling from multiple Git repositories at the same time is possible, but complicated, and beyond the scope of this post. Here’s an explanation of pushing and pulling to multiple remotes to help get you started, if you’re curious. The Git documentation on remotes may also be helpful.
Bash one-liners, when understood, can be fun and handy shortcuts. At the very least, being aware of tools like xargs and awk can help to automate and alleviate a lot of tediousness in our work. However, there are some downsides.
In terms of an easy-to-understand, maintainable, and approachable tool, Bash one-liners suck. They’re usually more complicated to write than a Bash script using if or while loops, and certainly more complicated to read. It’s likely that when we write them, we’ll miss a single quote or closing parenthesis somewhere; and as I hope this post demonstrates, they can take quite a bit of explaining, too. So why use them?
Imagine reading a recipe for baking a cake, step by step. You understand the methods and ingredients, and gather your supplies. Then, as you think about it, you begin to realize that if you just throw all the ingredients at the oven in precisely the right order, a cake will instantly materialize. You try it, and it works!
That would be pretty satisfying, wouldn’t it?
I’ve used Hugo to build my site for years, but until this past week I’d never hooked up my Pages repository to any deployment service. Why? Because using a tool that built my site before deploying it seemed to require having the whole recipe in one place - and if you’re using GitHub Pages with the free version of GitHub, that place is public. That means that all my three-in-the-morning bright ideas and messy unfinished (and unfunny) drafts would be publicly available - and no amount of continuous convenience was going to convince me to do that.
So I kept things separated, with Hugo’s messy behind-the-scenes stuff in a local Git repository, and the generated public/ folder pushing to my GitHub Pages remote repository. Each time I wanted to deploy my site, I’d have to get on my laptop and hugo to build my site, then cd public/ && git add . && git commit… etc etc. And all was well, except for the nagging feeling that there was a better way to do this.
I wrote another article a little while back about using GitHub and Working Copy to make changes to my repositories on my iPad whenever I’m out and about. It seemed off to me that I could do everything except deploy my site from my iPad, so I set out to change that.
A couple three-in-the-morning bright ideas and a revoked access token later (oops), I now have not one but two ways to deploy to my public GitHub Pages repository from an entirely separated, private GitHub repository. In this post, I’ll take you through achieving this with Travis CI or using Netlify and Make.
There’s nothing hackish about it - my public GitHub Pages repository still looks the same as it does when I pushed to it locally from my terminal. Only now, I’m able to take advantage of a couple great deployment tools to have the site update whenever I push to my private repo, whether I’m on my laptop or out and about with my iPad.
This article assumes you have working knowledge of Git and GitHub Pages. If not, you may like to spin off some browser tabs from my articles on using GitHub and Working Copy and building a site with Hugo and GitHub Pages first.
Let’s do it!
Travis CI has the built-in ability (♪) to deploy to GitHub Pages following a successful build. They do a decent job in the docs of explaining how to add this feature, especially if you’ve used Travis CI before… which I haven’t. Don’t worry, I did the bulk of the figuring-things-out for you.
In short, we’ll:
- Create a .travis.yml configuration file
- Encrypt a GitHub token using travis on the command line
- Set the repo configuration variable
(note the leading “.”). These scripts are very customizable and I struggled to find a relevant example to use as a starting point - luckily, you don’t have that problem!
Here’s my basic .travis.yml
:
git:
depth: false
env:
global:
- HUGO_VERSION="0.54.0"
matrix:
- YOUR_ENCRYPTED_VARIABLE
install:
- wget -q https://github.com/gohugoio/hugo/releases/download/v${HUGO_VERSION}/hugo_${HUGO_VERSION}_Linux-64bit.tar.gz
- tar xf hugo_${HUGO_VERSION}_Linux-64bit.tar.gz
- mv hugo ~/bin/
script:
- hugo --gc --minify
deploy:
provider: pages
skip-cleanup: true
github-token: $GITHUB_TOKEN
keep-history: true
local-dir: public
repo: gh-username/gh-username.github.io
target-branch: master
verbose: true
on:
branch: master
This script downloads and installs Hugo, builds the site with the garbage collection and minify flags, then deploys the public/ directory to the specified repo - in this example, your public GitHub Pages repository. You can read about each of the deploy configuration options here.
To add the GitHub personal access token as an encrypted variable, you don’t need to manually edit your .travis.yml. The travis gem commands below will encrypt and add the variable for you when you run them in your repository directory.
First, install travis with sudo gem install travis.
Then generate your GitHub personal access token, copy it (it only shows up once!) and run the commands below in your repository root, substituting your token for the kisses:
travis login --pro --github-token xxxxxxxxxxxxxxxxxxxxxxxxxxx
travis encrypt GITHUB_TOKEN=xxxxxxxxxxxxxxxxxxxxxxxxxxx --add env.matrix
Your encrypted token magically appears in the file. Once you’ve committed .travis.yml to your private Hugo repository, Travis CI will run the script and, if the build succeeds, deploy your site to your public GitHub Pages repo. Magic!
Travis will always run a build each time you push to your private repository. If you don’t want to trigger this behavior with a particular commit, add [ci skip] (or [skip ci]) to your commit message.
Yo that’s cool but I like Netlify.
Okay fine.
We can get Netlify to do our bidding by using a Makefile, which we’ll run with Netlify’s build command.
Here’s what our Makefile looks like:
SHELL:=/bin/bash
BASEDIR=$(CURDIR)
OUTPUTDIR=public
.PHONY: all
all: clean get_repository build deploy
.PHONY: clean
clean:
@echo "Removing public directory"
rm -rf $(BASEDIR)/$(OUTPUTDIR)
.PHONY: get_repository
get_repository:
@echo "Getting public repository"
git clone https://github.com/gh-username/gh-username.github.io.git public
.PHONY: build
build:
@echo "Generating site"
hugo --gc --minify
.PHONY: deploy
deploy:
@echo "Preparing commit"
@cd $(OUTPUTDIR) \
&& git config user.email "you@youremail.com" \
&& git config user.name "Your Name" \
&& git add . \
&& git status \
&& git commit -m "Deploy via Makefile" \
&& git push -f -q https://$(GITHUB_TOKEN)@github.com/gh-username/gh-username.github.io.git master
@echo "Pushed to remote"
To preserve the Git history of our separate GitHub Pages repository, we’ll first clone it, build our new Hugo site to it, and then push it back to the Pages repository. This script first removes any existing public/ folder that might contain files or a Git history. It then clones our Pages repository to public/, builds our Hugo site (essentially updating the files in public/), then takes care of committing the new site to the Pages repository.
In the deploy section, you’ll notice lines starting with &&. These are chained commands. Since Make invokes a new sub-shell for each line, it starts over with every new line from our root directory. To get our cd to stick and avoid running our Git commands in the project root directory, we’re chaining the commands and using the backslash character to break long lines for readability.
By chaining our commands, we’re able to configure our Git identity, add all our updated files, and create a commit for our Pages repository.
Similarly to using Travis CI, we’ll need to pass in a GitHub personal access token to push to our public GitHub Pages repository - only Netlify doesn’t provide a straightforward way to encrypt the token in our Makefile.
Instead, we’ll use Netlify’s Build Environment Variables, which live safely in our site settings in the Netlify app. We can then call our token variable in the Makefile. We use it to push (quietly, to avoid printing the token in logs) to our Pages repository by passing it in the remote URL.
To avoid printing the token in Netlify’s logs, we suppress recipe echoing for that line with the leading @ character.
With your Makefile in the root of your private GitHub repository, you can set up Netlify to run it for you.
Getting set up with Netlify via the web UI is straightforward. Once you sign in with GitHub, choose the private GitHub repository where your Hugo site lives. The next page Netlify takes you to lets you enter deploy settings:
You can specify the build command that will run your Makefile (make all for this example). The branch to deploy and the publish directory don’t matter too much in our specific case, since we’re only concerned with pushing to a separate repository. You can enter the typical master deploy branch and public publish directory.
Under "Advanced build settings" click "New variable" to add your GitHub personal access token as a Build Environment Variable. In our example, the variable name is GITHUB_TOKEN. Click "Deploy site" to make the magic happen.
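If you prefer to keep these settings with the code instead of in the web UI, my understanding is that the equivalent configuration can live in a netlify.toml at the repository root:
[build]
  command = "make all"
  publish = "public"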
If you’ve already previously set up your repository with Netlify, find the settings for Continuous Deployment under Settings > Build & deploy.
Netlify will build your site each time you push to the private repository. If you don’t want a particular commit to trigger a build, add [skip ci] in your Git commit message.
One effect of using Netlify this way is that your site will be built in two places: one is the separate, public GitHub Pages repository that the Makefile pushes to, and the other is your Netlify site that deploys on their CDN from your linked private GitHub repository. The latter is useful if you’re going to play with Deploy Previews and other Netlify features, but those are outside the scope of this post.
The main point is that your GitHub Pages site is now updated in your public repo. Yay!
I hope the effect of this new information is that you feel more able to update your sites, wherever you happen to be. The possibilities are endless - at home on your couch with your laptop, out cafe-hopping with your iPad, or in the middle of a first date on your phone. Endless!
To take full advantage of these bits of time, I needed a solution that let me pick up work on my Git repositories wherever I happen to be. That means a remote sync solution that bridges my iOS devices (iPad and iPhone) and my Linux machine.
After a lot of trial and error, I’ve found one that works really well. With synced Git repositories on iOS, I can seamlessly pick up work for any of my repositories on the go.
Here’s the setup I’ll walk you through in this article. It’s straightforward whether you’re a command line whiz or just getting into Git. Let’s do it!
Create a public or private repository on GitHub.
If you’re creating a new repository, you can follow GitHub’s instructions to push some files to it from your computer, or you can add files later from your iOS device.
Download Working Copy from the App Store. It’s a fantastic app. Developer Anders Borum has a steady track record of frequent updates and incorporating the latest features for iOS apps, like drag and drop on iPad. I think he’s fairly priced his product in light of the work he puts into maintaining and enhancing it.
In Working Copy, find the gear icon in the top left corner and touch to open Settings.
Tap on SSH Keys, and you’ll see this screen:
SSH keys, or Secure Shell keys, are access credentials used in the SSH protocol. Your key is a password that your device will use to securely connect with your remote repository host - GitHub, in this example. Since anyone with your SSH keys can potentially pretend to be you and gain access to your files, it’s important not to share them accidentally, like in a screenshot on a blog post.
Tap on the second line that looks like WorkingCopy@iPad-xxxxxxxx to get this screen:
Working Copy supports easy connection to GitHub. Tap Connect With GitHub to bring up some familiar sign-in screens that will authorize Working Copy to access your account(s).
Once connected, tap the + symbol in the top right of the side bar to add a new repository. Choose Clone repository to bring up this screen:
Here, you can either manually input the remote URL, or simply choose from the list of repositories that Working Copy fetches from your connected account. When you make your choice, the app clones the repository to your device and it will show up in the sidebar. You’re connected!
One of the (many) reasons I adore iA Writer is its ability to select your freshly cloned remote repository as a Library Location. To enable this, first open your Files app. On the Browse screen, tap the overflow menu (three dots) in the top right and choose Edit.
Turn on Working Copy as a location option:
Then in the iA Writer app:
Your remote repository now appears as a Location in the sidebar. Tap on it to work within this directory.
While inside this location, new files you create (by tapping the pencil-and-paper icon in the top right corner) will be saved to this folder locally. As you work, iA Writer automatically saves your progress. Next, we’ll look at pushing those files and changes back to your remote.
Once you’ve made changes to your files, open Working Copy again. You should see a yellow dot on your changed repository.
Tap on your repository name, then on Repository Status and Configuration at the top of the sidebar. Your changed files will be indicated by yellow dots or green + symbols. These mean that you’ve modified or added files, respectively.
Working Copy is a sweet iOS Git client, and you can tap on your files to see additional information including a comparison of changes (“diff”) as well as status and Git history. You can even edit files right within the app, with syntax highlighting for its many supported languages. For now, we’ll look at how to push your changed work to your remote repository.
On the Repository Status and Configuration page, you’ll see right at the top that there are changes to be committed. If you’re new to Git, this is like “saving your changes” to your Git history, something typically done with the terminal command git commit. You can think of this as saving the files that we’ll want to send to the GitHub repository. Tap Commit changes.
Enter your commit message, and select the files you want to add. Toggle the Push switch to send everything to your remote repository when you commit the files. Then tap Commit.
You’ll see a progress bar as your files are uploaded, and then a confirmation message on the status screen.
Congratulations! Your changes are now present in your remote repository on GitHub. You’ve successfully synced your files remotely!
To bring your updated files full circle to your computer, you pull them from the GitHub repository. I prefer to use the terminal for this as it’s quick and easy, but GitHub also offers a graphical client if terminal commands seem a little alien for now.
If you started with the GitHub repository, you can clone it to a folder on your computer by following these instructions.
When you update your work on your computer, you’ll use Git to push your changes to the remote repository. To do this, you can use GitHub’s graphical client, or follow these instructions.
On your iOS device, Working Copy makes pulling and pushing as simple as a single tap. On the Repository Status and Configuration page, tap on the remote name under Remotes.
Then tap Synchronize. Working Copy will take care of the details of pushing your committed changes and/or pulling any new changes it finds from the remote repository.
For a Git-based developer and work-anywhere-aholic like me, this set up couldn’t be more convenient. Working Copy really makes staying in sync with my remote repositories seamless, nevermind the ability to work with any of my GitHub repos on the go.
I most recently used this set up to get some writing done while hanging out in the atrium of Washington DC’s National Portrait Gallery, which is pleasantly photogenic.
Happy working! If you enjoyed this post, there’s a lot more where this came from! I write about computing, cybersecurity, and leading great technical teams. You can subscribe to see new articles first.
Here’s how you can create and maintain a clean and orderly Git commit history using message templates, learning how to squash commits, using git stash, and creating annotated commit tags.
Whether our code will be seen by the entire open source community or just future versions of ourselves, either one will be grateful if we commit responsibly today. Being responsible can mean a lot of things to different people, so I enlisted some folks from mastodon.technology (an instance that has since shut down) and dev.to to help round out my list. From those (really great) threads, I distilled these main points:
Committing responsibly
- Provide and/or use tests to avoid committing bugs or broken builds
- Write clean code that meets style specifications
- Use descriptive commit messages that reference related discussion
- Make only one change per commit and avoid including unrelated changes
Some of the above is achieved through maintaining a short feedback loop that helps you improve your code quality while staying accountable to yourself. I wrote another article that discusses this in detail, especially the part about code review. Other items on this list have to do specifically with making commits in Git. There are some features of Git that can benefit us in these areas, as can harnessing tools like Vim. I’ll cover those topics here.
If the majority of your Git commits so far have been created with something like git commit -m "Bug fixes" then this is the article for you!
I think Linus would be very happy if we didn’t use git commit -m "Fix bug" in a public repository ever again. As very well put in this classic post and the seven rules of a great Git commit message:
A properly formed Git commit subject line should always be able to complete the following sentence:
If applied, this commit will [your subject line here]
This other classic post also discusses three questions that the body of the commit message should answer:
Why is it necessary? How does it address the issue? What effects does the patch have?
This can be a lot to remember to cover, but there’s a slick way to have these prompts at hand right when you need it. You can set up a commit message template by using the commit.template configuration value.
To set it, configure Git to use a template file (for example, .gitmessage in your home directory), then create the template file with Vim:
git config --global commit.template ~/.gitmessage
vim ~/.gitmessage
When we run git commit without the -m message flag, the editor will open with our helpful template ready to go. Here’s my commit message template:
## If applied, this commit will...
## [Add/Fix/Remove/Update/Refactor/Document] [issue #id] [summary]
## Why is it necessary? (Bug fix, feature, improvements?)
-
## How does the change address the issue?
-
## What side effects does this change have?
-
I’m a fan of this format because commented lines are not included in the final message. I can simply fill in the blank lines with text and bullet points under the prompts, and it comes out looking something like this:
Fix #16 missing CSS variables
- Fix for unstyled elements
- Add background color, height for code blocks
- Only affects highlight class
Issue trackers in GitHub and Bitbucket both recognize the keywords close, fix, and resolve followed immediately by the issue or pull request number. These keywords conveniently help us close the referenced issue or pull request, and this helps maintain a clear trail of changes. GitLab, and issue trackers like Jira, offer similar functionalities.
By adding a few lines to our Vim configuration, we can make writing great git commit messages easy. We can add these lines to ~/.vimrc to turn on syntax highlighting in general, and spell check and text wrapping for commit messages in particular:
" Filetype detection, plugins, and indent rules
filetype plugin indent on
" Syntax highlighting
syntax on
" Spell check and line wrap just for git commit messages
autocmd Filetype gitcommit setlocal spell textwidth=72
If you’re curious, you can find my full ~/.vimrc in my dotfiles.
Other editors have settings that can help us out as well. I came across these for Sublime Text 3 and language specific settings for VS Code.
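As a small example of the latter, something like this in VS Code’s settings.json (using its language-scoped settings for the git-commit language, if I have the identifier right) draws rulers at the conventional 50- and 72-character marks:
"[git-commit]": {
  "editor.rulers": [50, 72]
}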
Let’s get one thing out of the way first: rewriting Git history just for the sake of having a pretty tree, especially with public repositories, is generally not advisable. It’s kind of like going back in time, where changes you make to your version of the project cause it to look completely different from a version that someone else forked from a point in history that you’ve now erased - I mean, haven’t you seen Back to the Future Part II? (If you’d rather maintain that only one Back to the Future movie was ever made, thus sparing your future self from having to watch the sequels, I get it.)
Here’s the main point. If you’ve pushed messy commits to a public repository, I say go right ahead and leave them be, instead of complicating things further. (We all learn from our embarrassments, especially the public ones - I’m looking at you, past-Vicky.) If your messy commits currently only exist on your local version, great! We can tidy them up into one clean, well-described commit that we’ll be proud to push, and no one will be the wiser.
There are a couple different ways to squash commits, and choosing the appropriate one depends on what we need to achieve.
The following examples are illustrated using git log --graph, with some options for brevity. We can set a handy alias to see this log format in our terminal with:
git config --global alias.plog "log --graph --pretty=format:'%h -%d %s %n' --abbrev-commit --date=relative --branches"
Then we just do git plog to see the pretty log.
This is appropriate when everything since origin/master, including any merge commits, should become a single commit on master, and none of the intermediate history needs to be kept.
This method takes a Git tree that looks like this:
* 3e8fd79 - (HEAD -> master) Fix a thing
|
* 4f0d387 - Tweak something
|
* 0a6b8b3 - Merge branch 'new-article'
|\
| * 33b5509 - (new-article) Update article again again
| |
| * 1782e63 - Update article again
| |
| * 3c5b6a8 - Update article
| |
* | f790737 - (master) Tweak unrelated article
|/
|
* 65af7e7 Add social media link
|
* 0e3fa32 (origin/master, origin/HEAD) Update theme
And makes it look like this:
* 7f9a127 - (HEAD -> master) Add new article
|
* 0e3fa32 - (origin/master, origin/HEAD) Update theme
Here’s how to do it - hold on to your hoverboards, it’s super complicated:
git reset --soft origin/master
git commit
Yup that’s all. We can delete the unwanted branch with git branch -D new-article.
This is appropriate when we’ve made a string of commits on our own branch and want to combine them into a single clean commit before merging or opening a pull request against origin/master.
This method takes a Git tree that looks like this:
* 13a070f - (HEAD -> new-article) Finish new article
|
* 78e728a - Edit article draft
|
* d62603c - Add example
|
* 1aeb20e - Update draft
|
* 5a8442a - Add new article draft
|
| * 65af7e7 - (master) Add social media link
|/
|
* 0e3fa32 - (origin/master, origin/HEAD) Update theme
And makes it look like this:
* 90da69a - (HEAD -> new-article) Add new article
|
| * 65af7e7 - (master) Add social media link
|/
|
* 0e3fa32 - (origin/master, origin/HEAD) Update theme
To squash the last five commits on branch new-article into one, we use:
git reset --soft HEAD~5
git commit -m "New message for the combined commit"
Where --soft leaves our files untouched and staged, and 5 can be thought of as “the number of previous commits I want to combine.”
We can then do git merge master and create our pull request.
Say we had a really confusing afternoon and our Git tree looks like this:
* dc89918 - (HEAD -> master) Add link
|
* 9b6780f - Update image asset
|
* 6379956 - Fix CSS bug
|
* 16ee1f3 - Merge master into branch
|\
| |
| * ccec365 - Update list page
| |
* | 033dee7 - Fix typo
| |
* | 90da69a - Add new article
|/
|
* 0e3fa32 - (origin/master, origin/HEAD) Update theme
We want to retain some of this history, but clean up the commits. We also want to change the messages for some of the commits. To achieve this, we’ll use git rebase.
This is appropriate when we want to keep some of the intermediate commits, but tidy up, combine, or reword others.
Git rebase is a powerful tool, and handy once we’ve got the hang of it. To change all the commits since origin/master, we do:
git rebase -i origin/master
Or, we can do:
git rebase -i 0e3fa32
Where the commit hash is the last commit we want to retain as-is.
The -i option lets us run the interactive rebase tool, which launches our editor with, essentially, a script for us to modify. We’ll see a list of our commits in reverse order to the git log, with the oldest at the top:
pick 90da69a Add new article
pick 033dee7 Fix typo
pick ccec365 Update list page
pick 6379956 Fix CSS bug
pick 9b6780f Update image asset
pick dc89918 Add link
# Rebase 0e3fa32..dc89918 onto 0e3fa32 (6 commands)
#
# Commands:
# p, pick = use commit
# r, reword = use commit, but edit the commit message
# e, edit = use commit, but stop for amending
# s, squash = use commit, but meld into previous commit
# f, fixup = like "squash", but discard this commit's log message
# x, exec = run command (the rest of the line) using shell
# d, drop = remove commit
#
# These lines can be re-ordered; they are executed from top to bottom.
#
# If you remove a line here THAT COMMIT WILL BE LOST.
#
# However, if you remove everything, the rebase will be aborted.
#
# Note that empty commits are commented out
#
~
The comments give us a handy guide as to what we’re able to do. For now, let’s squash the commits with small changes into the more significant commits. In our editor, we change the script to look like this:
pick 90da69a Add new article
squash 033dee7 Fix typo
pick ccec365 Update list page
squash 6379956 Fix CSS bug
squash 9b6780f Update image asset
squash dc89918 Add link
Once we save the changes, the interactive tool continues to run. It will execute our instructions in sequence. In this case, we see the editor again with the following:
# This is a combination of 2 commits.
# This is the 1st commit message:
Add new article
# This is the commit message #2:
Fix typo
# Please enter the commit message for your changes. Lines starting
# with '#' will be ignored, and an empty message aborts the commit.
#
# interactive rebase in progress; onto 0e3fa32
# Last commands done (2 commands done):
# pick 90da69a Add new article
# squash 033dee7 Fix typo
# Next commands to do (4 remaining commands):
# pick ccec365 Update list page
# squash 6379956 Fix CSS bug
# You are currently rebasing branch 'master' on '0e3fa32'.
#
# Changes to be committed:
# modified: ...
#
~
Here’s our chance to create a new commit message for this first squash, if we want to. Once we save it, the interactive tool will go on to the next instructions. Unless…
[detached HEAD 3cbad01] Add new article
1 file changed, 129 insertions(+), 19 deletions(-)
Auto-merging content/dir/file.md
CONFLICT (content): Merge conflict in content/dir/file.md
error: could not apply ccec365... Update list page
Resolve all conflicts manually, mark them as resolved with
"git add/rm <conflicted_files>", then run "git rebase --continue".
You can instead skip this commit: run "git rebase --skip".
To abort and get back to the state before "git rebase", run "git rebase --abort".
Could not apply ccec365... Update list page
Again, the tool offers some very helpful instructions. Once we fix the merge conflict, we can resume the process with git rebase --continue. Our interactive rebase picks up where it left off.
Once all the squashing is done, our Git tree looks like this:
* 3564b8c - (HEAD -> master) Update list page
|
* 3cbad01 - Add new article
|
* 0e3fa32 - (origin/master, origin/HEAD) Update theme
Phew, much better.
If we’re in the middle of some work and it’s not a good time to commit, but we need to switch branches, stashing can be a good option. Stashing lets us save our unfinished work without needing to create a half-assed commit. It’s like that pile of paper on your desk representing all the stuff you’ve been in the middle of doing since two weeks ago. Yup, that one.
It’s as easy as typing git stash:
Saved working directory and index state WIP on master: 3564b8c Update list page
The dirty work we’re in the midst of is safely tucked away, and our working directory is clean - just as it was after our last commit. To see what’s in our stash stack, we do git stash list:
stash@{0}: WIP on master: 3564b8c Update list page
stash@{1}: WIP on master: 90da69a Add new article
stash@{2}: WIP on cleanup: 0e3fa32 Update theme
To restore our work in progress, we use git stash apply. Git will try to apply our most recent stashed work. To apply an older stash, we use git stash apply stash@{1}, where 1 is the stash to apply. If changes since stashing our work prevent the stash from reapplying cleanly, Git will give us a merge conflict to resolve.
Applying a stash doesn’t remove it from our list. To remove a stash from our stack, we do git stash drop stash@{0}, where 0 is the one we want to remove.
We can also use git stash pop to apply the most recent stash and then immediately remove it from the stack.
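Put together, a typical detour looks something like this (the branch name is just for illustration):
git stash                      # tuck away the work in progress
git checkout urgent-fix        # go handle the interruption
# ...fix, commit, push...
git checkout master            # come back to where we were
git stash pop                  # restore the work in progress and drop it from the stack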
In the spirit of having a beautiful, clean Git history, there’s one more thing we can do to help make our commit log inspire infinite joy in its viewers. If you’ve never heard of git tag, your master branch history might look like this…
* 0377782 - Update theme
|
* ecf8128 - Add about page (#25)
|
* 33e432f - Fix #23 navigation bug
|
* 08b853b - Create blog section
|
* 63d18b4 - Add theme (#12)
|
* 233e23f - Add main content (#6)
Wouldn’t it be nice if it looked like this instead?
* 0377782 - (tag: v2.1.0) Update theme
|
* ecf8128 - Add about page (#25)
|
* 33e432f - Fix #23 navigation bug
|
* 08b853b - (tag: v2.0.0) Create blog section
|
* 63d18b4 - Add theme (#12)
|
* 233e23f - (tag: v1.1.0) Add main content (#6)
We can tag Git commits with anything, but tags are especially helpful for semantic versioning of releases. Sites like GitHub and GitLab have pages for repositories that list tags, letting viewers of our project browse the release versions. This can be helpful for public projects to differentiate major releases, updates with bug fixes, or beta versions.
There are two types of Git tags: lightweight and annotated. For adding a version tag to commits, we use annotated Git tags.
The Git tag documentation explains it this way:
Tag objects (created with -a, -s, or -u) are called “annotated” tags; they contain a creation date, the tagger name and e-mail, a tagging message, and an optional GnuPG signature. Whereas a “lightweight” tag is simply a name for an object (usually a commit object).
Annotated tags are meant for release while lightweight tags are meant for private or temporary object labels. For this reason, some git commands for naming objects (like git describe) will ignore lightweight tags by default.
We can think of lightweight tags as bookmarks, and annotated tags as signed releases.
For public repositories, annotated tags let us publish versioned releases and give commands like git describe a meaningful reference point.
To create an annotated Git tag and attach it to our current (last) commit, we do:
git tag -a v1.2.0 -m "Clever release title"
This tags the commit on our local repository. To push all annotated tags to the remote, we do:
git push --follow-tags
We can also set our Git configuration to push our annotated tags by default:
git config --global push.followTags true
If we then want to skip pushing tags this time, we pass --no-follow-tags.
A little time invested in getting familiar with these tools and practices can make your commits even more useful and well-crafted. With a little practice, these processes will become second nature. You can make it even easier by creating a personal commit checklist on paper to keep handy while you work - or if that isn’t fun enough, make it an interactive pre-commit hook.
Creating clean, useful, and responsible Git commits says a lot about you. Especially in remote work, Git commits may be a primary way that people interact with you over projects. With a little practice and effort, you can make your commit habits an even better reflection of your best work - work that is evidently created with care and pride.
If you enjoyed this post, there’s a lot more where it came from! I write about computing, cybersecurity, and leading great technical teams. Subscribe on victoria.dev to see new articles first, and check out the ones below!
Did you know that nearly 1 out of 5 coders are too embarrassed to ask this question? Don’t worry, it’s perfectly normal. In the next 60 seconds we’ll tell you all you need to know to pre-commit with confidence.
A Git hook is a feature of Git that triggers custom scripts at useful moments. They can be used for all kinds of reasons to help you automate your work, and best of all, you already have them! In every repository that you initialize with git init, you’ll have a set of example scripts living in .git/hooks. They all end with .sample and activating them is as easy as renaming the file to remove the .sample part.
Git hooks are not copied when a repository is cloned, so you can make them as personal as you like.
The useful moment in particular that we’ll talk about today is the pre-commit. This hook is run after you do git commit, and before you write a commit message. Exiting this hook with a non-zero status will abort the commit, which makes it extremely useful for last-minute quality checks. Or, a bit of fun. Why not both!
I only want the best for my family and my commits, and that’s why I choose an interactive pre-commit checklist. Not only is it fun to use, it helps to keep my projects safe from unexpected off-spec mistakes!
It’s so easy! I just write a bash script that can read user input, and plop it into .git/hooks as a file named pre-commit. Then I do chmod +x .git/hooks/pre-commit to make it executable, and I’m done!
Oh look, here comes an example bash script now!
#!/bin/sh
echo "Would you like to play a game?"
# Read user input, assign stdin to keyboard
exec < /dev/tty
while read -p "Have you double checked that only relevant files were added? (Y/n) " yn; do
case $yn in
[Yy] ) break;;
[Nn] ) echo "Please ensure the right files were added!"; exit 1;;
* ) echo "Please answer y (yes) or n (no):" && continue;
esac
done
while read -p "Has the documentation been updated? (Y/n) " yn; do
case $yn in
[Yy] ) break;;
[Nn] ) echo "Please add or update the docs!"; exit 1;;
* ) echo "Please answer y (yes) or n (no):" && continue;
esac
done
while read -p "Do you know which issue or PR numbers to reference? (Y/n) " yn; do
case $yn in
[Yy] ) break;;
[Nn] ) echo "Better go check those tracking numbers!"; exit 1;;
* ) echo "Please answer y (yes) or n (no):" && continue;
esac
done
exec <&-
Don’t delay! Take advantage right now of this generous one-time offer! An interactive pre-commit hook checklist can be yours, today, for the low, low price of… free? Wait, who wrote this script?