Inkscape PLC Rocketchat Logs

The monthly Inkscape PLC meeting transcripts are a way for those who can’t make the meeting to keep up with what is happening on the backend of the project. We don’t discuss much development or marketing (those are handled by separate teams), just the business-y stuff. We hold our meeting on the Inkscape Rocketchat server, and then I download the logs and put them in the PLC Git repo. There they are converted to a standard IRC log format (we held the meetings on IRC before we had Rocketchat) and rendered to HTML. You can see all the meeting transcripts listed on the Wiki page. While not super special, I’ve done this for a while and I don’t think how I do it has been posted anywhere. This blog post is solely to reduce the bus factor; hopefully no one will need it.

The first step is to get the logs as JSON files using Rocketchat History Downloader. I’m not doing anything really special there; I’m executing it like: pipenv run python settings.cfg. In my settings I’ve configured my login and pointed it at the Inkscape Rocketchat server. The other settings I’ve left at their defaults. There’s probably room for optimization, but I’m only running it once a month. All the history files then end up in the history-files directory.

The next step is to turn the JSON into something similar to an IRC log. This is trickier than it sounds, and I’ve ended up with a small shell script based on jq that turns the JSON into an IRC log:

jq '.messages[]|(.ts|capture("(?<date>.*)\\.[0-9]+Z$").date+"Z"|fromdate|strftime("%Y-%m-%dT%H:%M:%S"))+" <"+ if .alias then .alias else .u.username end +"> "+.msg' | sed -e "s|^.||" -e "s|.$||" -e "s|\\\\\"|\"|g" | sort

It does a number of things. It reformats the date to match the IRC format. It handles the alias that is used by the Inkscape IRC bot for IRC users in the chat. It handles all the quoting (and quoting of quoting) that the JSON/jq pipeline gives it. And it sorts by the timestamp, which the downloader doesn’t (something I find weird). I’m not going to go through the whole thing, but you should read the jq documentation if you’re curious about the individual items.
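To see what the pipeline produces, here’s a sketch run on a single made-up message (the username and text are placeholders, but the fields match the ones the jq program references):

```shell
# Feed one hypothetical downloader message through the exact pipeline above
echo '{"messages":[{"ts":"2022-09-09T12:34:56.789Z","u":{"username":"ted"},"msg":"hello"}]}' \
  | jq '.messages[]|(.ts|capture("(?<date>.*)\\.[0-9]+Z$").date+"Z"|fromdate|strftime("%Y-%m-%dT%H:%M:%S"))+" <"+ if .alias then .alias else .u.username end +"> "+.msg' \
  | sed -e "s|^.||" -e "s|.$||" -e "s|\\\\\"|\"|g" \
  | sort
# 2022-09-09T12:34:56 <ted> hello
```

The sed stage strips the surrounding quotes that jq emits and unescapes any embedded ones.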

And that’s it, folks. Not super hard, but I wanted to make sure it was documented somewhere.

posted Sep 9, 2022 | permanent link

Moving to Vercel

Previously I talked about moving to Jekyll and statically generating my webpage/blog. I’m still a big fan of statically generated sites, but that world has grown up a lot with new features including edge functions, which allow for some dynamic functionality on an otherwise static site. So I’m moving my website from using Gitlab Pages and Cloudflare to being built and deployed using Vercel.

Besides edge functions, one of the features I’m excited about in Vercel is their CI integration, which generates site previews for every branch. This makes it easier to test out a blog post (including this one) and make sure it looks sane before deploying it to the full site. The lack of previews was definitely one of the drawbacks I saw in using static sites, and they’ve elegantly fixed it.

Lastly, a reason to use Vercel is that we’ve made a cool Vercel observability integration over at Axiom. I like to be able to see exactly what some of our customers are experiencing, and this gives me the opportunity to play with some of the same toys. I’m not sure my blog will ever generate enough data to really need a tool like Axiom, but both have a “Hobby” tier that makes them zero cost.

posted Aug 12, 2022 | permanent link

Defining an Inkscape Contributor

When Inkscape was started, it was a loose coalition of folks that met on the Internet. We weren’t really focused on things like governance, the governance was mostly who was an admin on SourceForge (it was better back then). We got some donated server time for a website and we had a few monetary donations that Bryce handled mostly with his personal banking. Probably one of our most valuable assets, our domain, was registered to and paid for by Mentalguy himself.

Realizing that wasn’t going to last forever we started to look into ways to become a legal entity as well as a great graphics program. We decided to join the (then much smaller) Software Freedom Conservancy which has allowed us to take donations as a non-profit and connected us to legal and other services to ensure that all the details are taken care of behind the scenes. As part of joining The Conservancy we setup a project charter, and we needed some governance to go along with that. This is where we officially established what we call “The Inkscape Board” and The Conservancy calls the Project Leadership Committee. We needed a way to elect that board, for which we turned to the AUTHORS file in the Inkscape source code repository.

Today it is clear that the AUTHORS file doesn’t represent all the contributors to Inkscape. It hasn’t for a long time and realistically didn’t when we established it. But it was easy. What makes Inkscape great isn’t that it is a bunch of programmers in the corner doing programmer stuff, but that it is a collaboration between people with a variety of skill sets bringing those perspectives together to make something they couldn’t build themselves.

Who got left out? We chose a method that had a vocational bias, it preferred people who are inclined to and enjoy computer programming. As a result translators, designers, technical writers, article authors, moderators, and others were left out of our governance. And because of societal trends we picked up both a racial and gender bias in our governance. Our board has never been anything other than a group of white men.

We are now taking specific actions to correct this in the Inkscape charter and starting to officially recognize the contributions that have been slighted by this oversight.

Our core of recognizing contributors has always been peer review, with a rule we’ve called the “two patch rule.” It means that with two meaningful patches that are peer-reviewed and committed, you’re granted commit rights to the repository and added to the AUTHORS file. We want to keep this same spirit as we start to recognize a wider range of contributions, so we’re looking to make it the “two peers rule.” Here we’ll add someone to the list of contributors if two peers who are contributors say the individual has made significant contributions. Outside of the charter, we expect each group of contributors will make a list of what they consider to be a significant contribution so that potential contributors know what to expect. For instance, for developers it will likely remain patches.

We’re also taking the opportunity to build a process for contributors who move on to other projects. Life happens, interests change, and that’s a natural cycle of projects. But our old process which focused more on copyright of the code didn’t allow for contributors to be marked as retired. We will start to track who voted in elections (board members, charter changes, about screens, etc.) and contributors who fail to vote in two consecutive elections will be marked as retired. A retired contributor can return to active status by simply going through the “two peers rule.”

These are ideas to start the discussion, but we always want more input and ideas. Martin Owens will be hosting a video chat to talk about ideas surrounding how to update the Inkscape charter. Also, we welcome anyone to post on the mailing list for Inkscape governance.

As a founder it pains me to think of all the contributions that have gone unrecognized. Sure there were “thank yous” and beers at sprints, but that’s not enough. I hope this new era for Inkscape will see these contributions recognized and amplified so that Inkscape can continue to grow. The need for Free Software has only grown throughout Inkscape’s lifetime and we need to keep up!

posted Sep 8, 2021 | permanent link

Development in LXD

Most of my development is done in LXD containers. I love this for a few reasons. It takes all of my development dependencies and makes it so that they’re not installed on my host system, reducing the attack surface there. It means that I can do development on any Linux that I want (or several). But it also means that I can migrate my development environment from my laptop to my desktop depending on whether I need more CPU or whether I want it to be closer to where I’m working (usually when travelling).

When I’m traveling I use my Pagekite SSH setup on a Raspberry Pi as the SSH gateway. So when I’m at home I want to connect to the desktop directly, but when away connect through the gateway. To handle this I set up SSH to connect into the container no matter where it is. For each container I have an entry in my .ssh/config like this:

Host container-name
	User user
	IdentityFile ~/.ssh/id_container-name
	CheckHostIP no
	ProxyCommand ~/.ssh/ desktop-local %h

You’ll notice that I use a different SSH key for each container. They’re easy to generate, and not reusing them is good practice. Then for the ProxyCommand I have a shell script that sets up a connection depending on where the container is running and what network my laptop is on.
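Generating a dedicated key per container is a one-liner; here’s a sketch for a hypothetical container called devbox (the container name and comment are just placeholders):

```shell
# Create a dedicated, passphrase-less key pair for a container named "devbox"
mkdir -p ~/.ssh
ssh-keygen -t ed25519 -f ~/.ssh/id_devbox -N "" -C "devbox container key"
```

The matching .ssh/config entry then points IdentityFile at ~/.ssh/id_devbox.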


#!/bin/bash
set -e

# Arguments from the ProxyCommand: the desktop's hostname and the container
# name (%h). The variable names here are reconstructed.
LOCAL_HOST="$1"
CONTAINER_NAME="$2"

HOME_ROUTER_MAC="xx:xx:xx:xx:xx:xx"  # redacted in this post
REMOTE_HOST="pagekite-gateway"       # placeholder for the Pagekite gateway

# Which router do we use to reach the Internet? (any public address works)
ROUTER_IP=$( ip route get to 1.1.1.1 | sed -n -e "s/.*via \(.*\) dev.*/\\1/p" )
ROUTER_MAC=$( arp -n ${ROUTER_IP} | tail -1 | awk '{print $3}' )

IP_COMMAND="lxc list --format csv --columns 6 ^${CONTAINER_NAME}\$ | head --lines=1 | cut -d ' ' -f 1"
NC_COMMAND="nc -6 -q0"

IP=$( bash -c "${IP_COMMAND}" )
if [ "${IP}" != "" ] ; then
	# Local: connect straight to the container's SSH port
	exec ${NC_COMMAND} ${IP} 22
fi

# At home, go to the desktop directly; otherwise go through the gateway
if [ "${HOME_ROUTER_MAC}" == "${ROUTER_MAC}" ] ; then
	SSH_HOST="${LOCAL_HOST}"
else
	SSH_HOST="${REMOTE_HOST}"
fi

# Look up the container's IP on that host, then proxy nc through it
IP=$( echo ${IP_COMMAND} | ssh ${SSH_HOST} bash -l -s )

exec ssh ${SSH_HOST} -- bash -l -c "\"${NC_COMMAND} ${IP} 22\""

What this script does is first try to see whether the container is running locally by looking for its IP:

IP_COMMAND="lxc list --format csv --columns 6 ^${CONTAINER_NAME}\$ | head --lines=1 | cut -d ' ' -f 1"

If it can find that IP, it just sets up an nc command to connect to the SSH port at that IP directly. If not, we need to see whether we’re on my home network or out and about. To do that I check whether the MAC address of the default router matches the one on my home network. This is a good way to check because it doesn’t require sending additional packets onto the network or otherwise connecting to other services. To get the router’s IP we look at which router is used to get to an address on the Internet:

ROUTER_IP=$( ip route get to 1.1.1.1 | sed -n -e "s/.*via \(.*\) dev.*/\\1/p" )

We can then find out the MAC address for that router using the ARP table:

ROUTER_MAC=$( arp -n ${ROUTER_IP} | tail -1 | awk '{print $3}' )

If that MAC address matches a predefined value (redacted in this post), I know it’s my home router; otherwise I’m out on the Internet somewhere. That tells me whether I need to go through the proxy or can connect directly. Once we can connect to the desktop machine, we look for the IP address of the container from there, using the same IP command run on the desktop. Lastly, we set up an nc to connect to the container’s SSH daemon using the desktop as a proxy.

exec ssh ${SSH_HOST} -- bash -l -c "\"${NC_COMMAND} ${IP} 22\"" 

What all this means is that I can just type ssh container-name anywhere and it just works. I can move my containers wherever, my laptop wherever, and connect to my development containers as needed.

posted Jun 14, 2019 | permanent link

OAuth2 in the Shell

For some scripts at work I need to log into our Gitlab instance and use its API. To do that you need an OAuth2 token, and I wasn’t able to find any examples that I could crib from, so I’m posting what I made. Hopefully this’ll help you do the same for your scripts. I should mention that I’m using this with Gitlab as per their instructions, it might be slightly different for other OAuth implementors, but should be roughly the same.

First let’s just put the whole script out there before we break it down:


#!/bin/bash
set -e

# From registering the script as an application in Gitlab (placeholders)
OAUTH_CLIENT="application-id-goes-here"
OAUTH_SECRET="application-secret-goes-here"
# Base URL of the Gitlab instance (placeholder)
GITLAB_URL="https://gitlab.example.com"
PORT=5000

xdg-open "${GITLAB_URL}/oauth/authorize?client_id=${OAUTH_CLIENT}&redirect_uri=http://localhost:${PORT}/&response_type=code" &> /dev/null

OAUTH_CODE=$( echo -e "HTTP/1.1 200 OK\n\n<HTML><body><blink>Thank you</blink></body></HTML>" | nc -l -p ${PORT} | sed -n "s/^GET.*code=\([a-fA-F0-9]*\).*/\1/p" )

if [ "${OAUTH_CODE}" == "" ] ; then
	echo "Unable to get OAUTH code"
	exit 1
fi

OAUTH_TOKEN=$( curl -X POST "${GITLAB_URL}/oauth/token" -F "client_id=${OAUTH_CLIENT}" -F "client_secret=${OAUTH_SECRET}" -F "code=${OAUTH_CODE}" -F "grant_type=authorization_code" -F "redirect_uri=http://localhost:${PORT}/" | jq --raw-output ."access_token" )

if [ "${OAUTH_TOKEN}" == "" ] ; then
	echo "Unable to get OAUTH token"
	exit 1
fi
When you want to use an OAuth2 client with Gitlab, the first thing you need to do is register it as an application, getting the OAUTH_CLIENT and OAUTH_SECRET strings. You’ll need the first one in the call that opens the user’s browser.

xdg-open "${GITLAB_URL}/oauth/authorize?client_id=${OAUTH_CLIENT}&redirect_uri=http://localhost:${PORT}/&response_type=code" &> /dev/null

The thing to notice in this call is that we’re using localhost for the redirect URL. That means that after the user authenticates the script (assuming they do), Gitlab will redirect the browser back to this host with the code needed to get the token. We then need a webserver running on this machine to receive that code.

OAUTH_CODE=$( echo -e "HTTP/1.1 200 OK\n\n<HTML><body><blink>Thank you</blink></body></HTML>" | nc -l -p ${PORT} | sed -n "s/^GET.*code=\([a-fA-F0-9]*\).*/\1/p" ) 

For our webserver we’re using the trusty netcat to open a port and give us the data sent there. We go ahead and give the browser a nice webpage to say thanks (you know it’s a serious Thank You when you use the <blink> tag). The output you get from netcat is something like this:

GET /?code=123456789abcdef HTTP/1.1
Host: localhost:5000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:66.0) Gecko/20100101 Firefox/66.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
DNT: 1
Connection: keep-alive
Upgrade-Insecure-Requests: 1

So we use sed to pull out the code field, replacing the whole line with just the code and printing it. This gives us the code that we can then exchange for a token.
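The extraction can be tried in isolation by feeding a request line straight to sed (the code value here is the made-up one from the sample request above):

```shell
# Pull the code parameter out of the request line netcat captured
echo "GET /?code=123456789abcdef HTTP/1.1" \
  | sed -n "s/^GET.*code=\([a-fA-F0-9]*\).*/\1/p"
# 123456789abcdef
```

The -n flag plus the trailing p means only a matching line is printed, so any other request lines produce no output at all.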

OAUTH_TOKEN=$( curl -X POST "${GITLAB_URL}/oauth/token" -F "client_id=${OAUTH_CLIENT}" -F "client_secret=${OAUTH_SECRET}" -F "code=${OAUTH_CODE}" -F "grant_type=authorization_code" -F "redirect_uri=http://localhost:${PORT}/" | jq --raw-output ."access_token" )

We set up a rather long curl call with several parameters, which results in a JSON object that looks something like:

{
  "access_token": "de6780bc506a0446309bd9362820ba8aed28aa506c71eedbe1c5c4f9dd350e54",
  "token_type": "bearer",
  "expires_in": 7200,
  "refresh_token": "8257e65c97202ed1726cf9571600918f3bffb2544b26e00a61df9897668c33a1"
}

We then use jq to select the access_token and we’re good to go. Now we can use that token to access the Gitlab API as needed.
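That selection can be checked on its own; here’s a sketch using a shortened stand-in token:

```shell
# --raw-output strips the JSON quotes so the token is usable directly
echo '{"access_token":"de6780bc506a","token_type":"bearer","expires_in":7200}' \
  | jq --raw-output .access_token
# de6780bc506a
```

Without --raw-output, jq would print the value with its surrounding quotes, which would then leak into any Authorization header built from it.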

posted May 8, 2019 | permanent link

All the older posts...