Cogito, Ergo Sumana

Categories: sumana | Open Source and Free Culture:Advice

Advice to open stuff newbies, maintainers, and other contributors.


(1) : Inclusive-Or: Hospitality in Bug Tracking:

Lindsey Kuper asked:

I’m interested in hearing about [open source software] projects that have successfully adopted an "only insiders use the issue tracker" approach. For instance, a project might have a mailing list where users discuss bugs in an unstructured way, and project insiders distill those discussions into bug reports to be entered into the issue tracker. Where does this approach succeed, and where does it fail? How can projects that operate this way effectively communicate their expectations to non-insider users, especially those users who might be more accustomed to using issue trackers directly?
More recently, Jillian C. York wrote:

...sick of "just file a bug with us through github!" You realize that's offputting to your average users, right?

If you want actual, average users to submit bugs, you know what you have to do: You have to use email. Sorry, but it's true.

Oh, and that goes especially for high-risk users. Give them easy ways to talk to you. You know who you are, devs.

Both Kuper and York are getting at the same question: how do we open source maintainers get the bug reports we need, in a way that works for us and for our users?

My short answer is that open source projects should have centralized bug trackers that are as easy as possible to work in as an expert user, and that they should find automated ways to accept bug reports from less structured and less expert sources. I'll discuss some examples and then some general principles.

Dreamwidth: Dreamwidth takes support questions via a customer support interface. The volunteers and paid staff answering those questions sometimes find that a support request reveals a bug, and then file it in GitHub on the customer's behalf, then tell her when it's fixed. (Each support request has a private section that only Support can see, which makes it easier to track the connection between Support requests and GitHub issues, and Support regulars tend to have enough ambient awareness of both Support and GitHub traffic to speak up when relevant issues crop up or get closed.) Dreamwidth users and developers who are comfortable using the GitHub issue tracker are welcomed if they want to file bugs there directly instead.

Dreamwidth also has a non-GitHub interface for feature suggestions: the suggestions form is the preferred interface for people to suggest new features for Dreamwidth. Users post their suggestions into a queue and a maintainer chooses whether to turn that suggestion into a post for open discussion in the dw-suggestions community, or whether to bounce it straight into GitHub (e.g., for an uncontroversial request to whitelist a new site for media embedding or add a new site for easy cross-site user linking, or at the maintainer's prerogative). Once a maintainer has turned a suggestion into a post, other users use an interface familiar to them (Dreamwidth itself) to discuss whether they want the feature. Then, if they and the maintainer come to consensus and approve it, the maintainer adds a ticket for it to GitHub. That moderation step has been a bottleneck in the past, and the process of moving a suggestion into GitHub also hasn't yet been automated.

Since discussion about site changes needs to include users who aren't developers, Dreamwidth maintainers prefer that people use the suggestions form; experienced developers sometimes start conversations in GitHub, but the norm (at least the official norm) is to use dw-suggestions; I think the occasional GitHub comment suffices for redirecting these discussions.

Zulip: We use GitHub issues. The Zulip installations hosted by Kandra Labs (the for-profit company that stewards the open source project) also have a "Send feedback" button in one of the upper corners of the Zulip web user interface. Clicking this opens a private message conversation with feedback-at-zulip.com, which users used more heavily when the product was younger. (We also used to have a nice setup where we could actually send you replies in-Zulip, and may bring that back in the future.)

I often see Tim Abbott and other maintainers noticing problems that new users/customers are having and, while helping them (via the zulip-devel mailing list, via the Zuliping-about-Zulip chat at chat.zulip.org, or in person), opening GitHub issues about the problem, as the next step towards a long-term fix. But -- as with the Dreamwidth example -- it is also fine for people who are used to filing bug reports or feature requests directly to go ahead and file them in GitHub. And if Tim et alia know that the person they're helping has that skill and probably has the time to write up a quick issue, then the maintainers will likely say, "hey, would you mind filing that in GitHub?"

We sometimes hold live office hours at chat.zulip.org. At yesterday's office hour, Tim set up a discussion topic named "warts" and said,

I think another good topic is to just have folks list the things that feel like they're some of our uglier/messier parts of the UI that should be getting attention. We can use this topic to collect them :).

Several people spoke up about little irritations, and we ended up filing and fixing multiple issues. One of Zulip's lead developers, Steve Howell, reflected: "As many bug reports as we get normally, asking for 'warts' seems to empower customers to report stuff that might not be considered bugs, or just empower them to speak up more." I'd also point out that some people feel more comfortable responding to an invitation in a synchronous conversation than initiating an asynchronous one -- plus, there's the power of personal invitation to consider.

As user uptake goes up, I hope we'll also have more of a presence on Twitter, IRC, and Stack Overflow in order to engage people who are asking questions there and help them there, and get proto-bug reports from those platforms to transform into GitHub issues. We already use our Twitter integration to help -- if someone mentions Zulip in a public Tweet, a bot tells us about it in our developers' livechat, so we can log into our Twitter account and reply to them.

MediaWiki and Wikimedia: Wikipedia editors and other contributors have a lot of places they communicate about the sites themselves, such as the technical-issues subforum of English Wikipedia's "Village Pump", and similar community-conversation pages within other Wikipedias, Wikivoyages, etc. Under my leadership, the team within Wikimedia Foundation's engineering department that liaised with the larger Wikimedia community grew more systematic about working with those Wikimedia spaces where users were saying things that were proto-bug reports. We got more systematic about listening for those complaints, filing them as bugs in the public bug tracker, and keeping in touch with those reporters as bugs progressed -- and building a kind of ambassador community to further that kind of information dissemination. (I don't know how well that worked out; I think we built a better social infrastructure for people who were already doing that kind of volunteer work ad hoc, but I don't know whether we succeeded in recruiting more people to do it, and I haven't kept a close eye on how that's gone in the years since I left.)

We also worked to make it easy for people to report bugs into the main bug tracker. The Bugzilla installation we had for most of the time that I was at Wikimedia had two bug reporting forms: a "simple" submission form that we pointed most people to, with far fewer fields, and an "advanced" form that Wikimedia-experienced developers used. They've moved to Phabricator now, and I don't know whether they've replicated that kind of two-lane approach.

A closed-source example: FogBugz. When I was at Fog Creek Software doing sales and customer support, we used FogBugz as our internal bug tracker (to manage TODOs for our products,* and as our customer relationship manager). Emails into the relevant email addresses landed in FogBugz, so it was easy for me to reply directly to help requests that I could fix myself, and easy for me to note "this customer support request demonstrates a bug we need to fix" and turn it into a bug report, or open a related issue for that bug report. If I recall correctly, I could even set the visibility of the issue so the customer could see it and its progress (unusual, since almost all our issue-tracking was private and visible only within the company).

An interface example: Debian. Debian lets you report bugs via email and via the command-line reportbug program. As the "how to use BTS" guide says,

some spam messages managed to send mails to -done addresses. Those are usually easily caught, and given that everything can get reverted easily it's not that troublesome. The package maintainers usually notice those and react to them, as do the BTS admins regularly.

The BTS admins also have the possibility to block some senders from working on the bug tracking system in case they deliberately do malicious things.

But being open and inviting everyone to work on bugs totally outweighs the troubles that sometimes pop up because of misuse of the control bot.

And that leads us to:

General guidelines: Dreamwidth, Zulip, MediaWiki, and Debian don't discourage people from filing bug reports in the official central bug tracker. Even someone quite new to a particular codebase/project can file a very helpful and clear bug report, after all, as long as they know the general skill of filing a good bug report. Rather, I think the philosophy is what you might find in hospitable activism in general: meet people where they are, and provide a means for them to conveniently start the conversation in a time, place, and manner that's more comfortable for them. For a lot of people, that means email, or the product itself.

Failure modes can include:

Whether or not you choose to increase the number of interfaces you enable for bug reporting, it's worth improving the user experience for people reporting bugs into your main bug tracker. Tedious, lots-of-fields issue tracker templates and UIs decrease throughput, even for skilled bug reporters who simply aren't used to the particular codebase/project they're currently trying to file an issue about. So we should make that easier. You can provide an easy web form, as Wikimedia did via the simplified Bugzilla form, or an email or in-application route, as Debian does.
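To make the email route concrete, here is a sketch of what a mailed-in bug report can look like, modeled on Debian's BTS conventions (mail to submit@bugs.debian.org, with a Package:/Version: pseudo-header at the top of the body); the package name and details are invented for illustration:

  To: submit@bugs.debian.org
  Subject: frobnicator: crashes when the config file is empty

  Package: frobnicator
  Version: 1.2-3
  Severity: normal

  Steps to reproduce: create an empty ~/.frobnicator.conf, then run "frobnicator --check".
  Expected: a warning about the empty config. Actual: a crash with a traceback.

Debian's reportbug tool walks a reporter through composing roughly this message, which is part of why the email route stays usable for people who will never open a web-based tracker.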

And FLOSS projects oughta do what the Accumulo folks did for Kuper, too, saying, "I can file that bug for you." We can be inclusive-or rather than exclusive-or about it, you know? That's how I figure it.


* Those products were CityDesk, Copilot, and FogBugz -- this was before Kiln, Stack Overflow, Trello, and Glitch.

Thanks to Lindsey Kuper and Jillian C. York for sparking this post, and thanks to azurelunatic for making sure I got Dreamwidth details right.

Filed under:


(1) : How to Teach And Include Volunteers who Write Poor Patches: You help run an open source software community, and you've successfully signalled that you're open to new contributors, including people who aren't professional software engineers. And you've already got an easy developer setup process and great test coverage so it's easy for new people to get up and running fast. Great!

Some of the volunteers who join you are less-skilled programmers, and they're submitting pull requests/patches that need a lot of review and reworking before you can merge them.

How do you improve these volunteers' work, help them do productive things for the project, and encourage and include them?

My suggestions for you fall into three categories: helping them improve their code, dealing with the poor-quality pull requests themselves, and redirecting their energies to improve the project in other ways.

Teaching them to improve their code

Dealing with poor patches themselves

Using their knowledge and curiosity to improve the project in other ways

This list is absolutely not the be-all and end-all for this topic; I'd like to know what approaches others use.


Thanks to Noah Swartz for starting a conversation at Maintainerati that spurred me to write this post.

Filed under:


: What Does An Award Do?: I posted on MetaFilter about the new Disobedience Award that MIT Media Lab is starting (nomination deadline: May 1st). And in the comments there, I stumbled into talking about why one might found an award, and thought it was worth expanding a bit here.

I think anyone who thinks for a second about awards -- assuming the judgment is carried out in good faith -- says, well, it's to reward excellence. Yup! But what are the particular ways an award rewards excellence, and when might an award be a useful tool to wield?

Let's say you are an organization and you genuinely want to celebrate and encourage some activity or principle, because you think it's important and there's not enough of it, particularly because there are so many norms and logistical disincentives pushing to reduce it. For example, you might want to encourage altruistic resistance. Let's say your organization already has a bunch of ongoing processes, like teaching or making products or processing information, and maybe you make some changes in those processes to increase how likely it is that you're encouraging altruistic resistance, but that isn't really apparent to the world outside your doors in the near term, and the effects take a while to percolate out.

So maybe you could set up an award. An award can:

And if an award keeps going and catches on, then people start using it as a shorthand for a goal. New practitioners can dream of winning the acclamation that a Pulitzer, a Nobel, a Presidential Medal of Freedom carries. If there's an award for a particular kind of excellence, and the community keeps records of who wins that award, then in hard moments, it can be easier for a practitioner to think of that roll call of heroes and say to herself, "keep on going". We put people on pedestals not for them, but for us, so it's easier for us to see them and model ourselves after them.

So, all awards are simplistic summative judgments, but if the problem is that we need to balance the scales a bit, maybe it'll help anyway.

Nalo Hopkinson is doing it via the Lemonade Award for kindness in the speculative fiction community. The Tiptree Award does it for the expansion & exploration of gender. Open Source Bridge does it for community-making in open source with the Open Source Citizenship Award for "someone who has put in extra effort to share knowledge and make the open source world a better place."* It's worth considering: in your community, do people lack a way to find and celebrate a particular sort of excellence? You have a lot of tools you could wield, and awards are one of them.


* I realized today that I don't think the list of past Open Source Citizenship Award recipients is in one place anywhere! Each of these people was honored with a "Truly Outstanding Open Source Citizen" medal or plaque by the Open Source Bridge conference to celebrate our engagement "in the practice of an interlocking set of rights and responsibilities."

Filed under:


: New Zine "Playing With Python: Two of My Favorite Lenses":

half-scratched-out bpython logo, Python code, and technical prose written and drawn on paper, with notebook and pen, on a wooden table that also has a mug and a laptop on it

MergeSort, the feminist maker meetup I co-organize, had a table at Maker Faire earlier this month. Last year we'd given away (and taught people how to cut and fold) a few of my zines, and people enjoyed that. A week before Maker Faire this year, I was attempting to nap when I was struck with the conviction that I ought to make a Python zine to give out this year.

So I did! Below is Playing with Python: 2 of my favorite lenses. (As you can see from the photos of the drafting process, I thought about mentioning pdb, various cool libraries, and other great parts of the Python ecology, but narrowed my focus to bpython and python -i.)

Zine cover; transcription below

Playing with Python
2 of my favorite lenses
[magnifying glass and eyeglass icons]

by Sumana Harihareswara

Second and third pages of zine; transcription below

When I'm getting a Python program running for the 1st time, playing around & lightly sketching or prototyping to figure out what I want to do, I [heart]:
bpython & python -i

[illustrations: sketch of a house, outline of a house in dots]

Fourth and fifth pages of zine; transcription below

bpython is an exploratory Python interpreter. It shows what you can do with an object:

>>> dogs = ["Fido", "Toto"]
>>> dogs.
append count extend index insert pop remove reverse sort

And, you can use Control-R to undo!
bpython-interpreter.org

[illustrations: bpython logo, pointer to cursor after dogs.]

Sixth and seventh pages of zine; transcription below

Use the -i flag when running a script, and when it finishes or crashes, you'll get an interactive Python session so you can inspect the state of your program at that moment!

$ python -i example.py
Traceback (most recent call last):
    File "example.py", line 5, in 
        toprint = varname + "entries"
TypeError: unsupported operand type(s) for + : 'int' and 'str'
>>> varname
3
>>> type(varname)

[illustration: pointer to type(varname) asking, "wanna make a guess?"]

Back cover of zine; transcription below

More: "A Few Python Tips"
harihareswara.net/talks.html
This zine made in honor of
MergeSort
NYC's feminist makerspace!
http://mergesort.nyc

CC BY-SA 2016 Sumana Harihareswara
harihareswara.net @brainwane

Everyone has something to teach;
everyone has something to learn.

Laptop displaying bpython logo next to half-scratched-out bpython logo, Python code, and technical prose written and drawn on paper, with notebook and pen and mug, on a wooden table

Here's the directory that contains those thumbnails, plus a PDF to print out and turn into an eight-page booklet with one center cut and a bit of folding. That directory also contains a screenshot of the bpython logo with a grid overlaid, in case you ever want to hand-draw it. Hand-drawing the bpython logo was the hardest thing about making this zine (beating "fitting a sample error message into the width allotted" by a narrow margin).

Libby Horacek and Anne DeCusatis not only volunteered at the MergeSort table -- they also created zines right there and then! (Libby, Anne.) The software zine heritage of The Whole Earth Software Review, 2600, BubbleSort, Julia Evans, The Recompiler, et alia continues!

(I know about bpython and python -i because I learned about them at the Recurse Center. Want to become a better programmer? Join the Recurse Center!)

Filed under:


(1) : Rough Notes for New FLOSS Contributors On The Scientific Method and Usable History: Some thrown-together thoughts towards a more comprehensive writeup. It's advice on about how to get along better as a new open source participant, based on the fundamental wisdom that you weren't the first person here and you won't be the last.

We aren't just making code. We are working in a shared workplace, even if it's an online place rather than a physical office or laboratory, making stuff together. The work includes not just writing functions and classes, but experiments and planning and coming up with "we ought to do this" ideas. And we try to make it so that anyone coming into our shared workplace -- or anyone who's working on a different part of the project than they're already used to -- can take a look at what we've already said and done, and reuse the work that's been done already.

We aren't just making code. We're making history. And we're making a usable history, one that you can use, and one that the contributor next year can use.

So if you're contributing now, you have to learn to learn from history. We put a certain kind of work in our code repositories, both code and notes about the code. git grep idea searches a code repository's code and comments for the word "idea", git log --grep="idea" searches the commit history for times we've used the word "idea" in a commit message, and git blame codefile.py shows you who last changed every line of that codefile, and when. And we put a certain kind of work into our conversations, in our mailing lists and our bug/issue trackers. We say "I tried this and it didn't work" or "here's how someone else should implement this" or "I am currently working on this". You will, with practice, get better at finding and looking at these clues, at finding the bits of code and conversation that are relevant to your question.
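For instance, here's roughly what that kind of spelunking looks like at the command line (the filename is just an example; substitute whichever part of the project you're curious about):

  $ git grep idea                     # where "idea" appears in current code and comments
  $ git log --grep="idea"             # commits whose messages mention "idea"
  $ git blame docs/architecture.md    # who last changed each line of this file, and when
  $ git log --follow -- docs/architecture.md    # that file's whole history, even across renames

None of this needs special permissions; anyone with a clone of the repository can read its history.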

And you have to learn to contribute to history. This is why we want you to ask your questions in public -- so that when we answer them, someone today or next week or next year can also learn from the answer. This is why we want you to write emails to our mailing lists where you explain what you're doing. This is why we ask you to use proper English when you write code comments, and why we have rules for the formatting and phrasing of commit messages, so it's easier for someone in the future to grep and skim and understand. This is why a good question or a good answer has enough context that other people, a year from now, can see whether it's relevant to them.

Relatedly: the scientific method is for teaching as well as for troubleshooting. I compared an open source project to a lab before. In the code work we do, we often use the scientific method. In order for someone else to help you, they have to create, test, and prove or disprove theories -- about what you already know, about what your code is doing, about the configuration on your computer. And when you see me asking a million questions, asking you to try something out, asking what you have already tried, and so on, that's what I'm doing. I'm generally using the scientific method. I'm coming up with a question and a hypothesis and I'm testing it, or asking you to test it, so we can look at that data together and draw conclusions and use them to find new interesting questions to pursue.

Example:

So I'll ask a question to try and prove or disprove my hypothesis. And if you never reply to my question, or you say "oh I fixed it" but don't say how, or if you say "no that's not the problem" but you don't share the evidence that led you to that conclusion, it's harder for me to help you. And similarly, if I'm trying to figure out what you already know so that I can help you solve a problem, I'm going to ask a lot of diagnostic questions about whether you know how to do this or that. And it's ok not to know things! I want to teach you. And then you'll teach someone else.

In our coding work, it's a shared responsibility to generate hypotheses and to investigate them, to put them to the test, and to share data publicly to help others with their investigations. And it's more fruitful to pursue hypotheses, to ask "I tried ___ and it's not working; could the reason be this?", than it is to merely ask "what's going on?" and push the responsibility of hypothesizing and investigation onto others.

This is a part of balancing self-sufficiency and interdependence. You must try, and then you must ask. Use the scientific method and come up with some hypotheses, then ask for help -- and ask for help in a way that helps contribute to our shared history, and is more likely to help ensure a return-on-investment for other people's time.

So it's likely to go like this:

  1. you try to solve your problem until you get stuck, including looking through our code and our documentation, then start formulating your request for help
  2. you ask your question
  3. someone directs you to a document
  4. you go read that document, and try to use it to answer your question
  5. you find you are confused about a new thing
  6. you ask another question
  7. now that you have demonstrated that you have the ability to read, think, and learn new things, someone has a longer talk with you to answer your new specific question
  8. you and the other person collaborate to improve the document that you read in step 4 :-)

This helps us make a balance between person-to-person discussion and documentation that everyone can read, so we save time answering common questions but also get everyone the personal help they need. This will help you understand the rhythm of help we provide in livechat -- including why we prefer to give you help in public mailing lists and channels, instead of in one-on-one private messages or email. We prefer to hear from you and respond to you in public places so more people have a chance to answer the question, and to see and benefit from the answer.

We want you to learn and grow. And your success is going to include a day when you see how we should be doing things better, not just with a new feature or a bugfix in the code, but in our processes, in how we're organizing and running the lab. I also deeply want for you to take the lessons you learn -- about how a group can organize itself to empower everyone, about seeing and hacking systems, about what scaffolding makes people more capable -- to the rest of your life, so you can be freer, stronger, a better leader, a disruptive influence in the oppressive and needless hierarchies you encounter. That's success too. You are part of our history and we are part of yours, even if you part ways with us, even if the project goes defunct.

This is where I should say something about not just making a diff but a difference, or something about the changelog of your life, but I am already super late to go on my morning jog and this was meant to be a quick-and-rough braindump anyway...

Filed under:


: Advice on Starting And Running A New Open Source Project: Recently, a couple of programmers asked me for advice on starting and running a new open source project. So, here are some thoughts, assuming you're already a programmer, you haven't led a team before, and you know your new software project is going to be open source.

I figure there are a few different kinds of best practices in starting and running open source projects.

General management: Some of my recommendations are the same kinds of best practices that are useful anytime you're starting/running/managing any kind of project, inside or outside the software world.

For instance: know why you're starting this thing. Write down even just a one-paragraph description, or a 100-word bulleted list, of what you are aiming at. This will reduce the chance that you'll look up one day and see that your targeted little tool has turned into a mess that's trying to be an entire operating system.

And: if you're making something that you want other people to use, then check what those other people are already using/doing, so you can make sure you suit their needs. This guards against any potential perception that you are starting a new project thoughtlessly, or just for the heck of it, or to learn a new framework. In the software world, this includes taking note of your target users' dependencies (e.g., the versions of Python/NumPy that they already have installed).
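As a small, concrete version of that: if you're shipping a Python library, you can record those compatibility decisions right in your packaging metadata. This is only a sketch -- the project name and version range are invented, and the point is that the range should come from what your target users actually have installed, not from whatever happens to be newest:

  # setup.py -- a minimal sketch for a hypothetical project called "yourtool"
  from setuptools import setup, find_packages

  setup(
      name="yourtool",
      version="0.1.0",
      packages=find_packages(),
      install_requires=[
          # deliberately wide, matching what surveyed users already run
          "numpy>=1.9,<2.0",
      ],
  )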

Resources I have found useful here include William Ball's book on theatrical direction A Sense of Direction, Dale Carnegie's How to Win Friends and Influence People, Fisher & Ury's Getting To Yes, Cialdini's Influence: The Science of Persuasion, and Ries & Trout's Positioning: The Battle for Your Mind.

Tech management: Some best practices are the same kinds of habits that help in managing any kind of software project, including closed-source projects as well.

For instance: more automated tests in/for your codebase are better, because they reduce regressions so you can move faster and merge others' code faster (and let others review and merge code faster), but don't sweat getting to 100%, because there's definitely a decreasing marginal utility to this stuff. Travis CI is pretty easy to set up for the common case.
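Even one or two tests are better than none. Here's a minimal sketch, assuming a pytest-style runner and a hypothetical slugify() helper in your project (both are stand-ins, not anything from a particular codebase):

  # test_slugify.py -- run with "pytest"
  from yourtool.text import slugify   # hypothetical module and function

  def test_spaces_become_hyphens():
      assert slugify("Hello World") == "hello-world"

  def test_punctuation_is_dropped():
      assert slugify("Hello, World!") == "hello-world"

Once something like this exists, wiring it into Travis CI (or whatever CI service you prefer) means every pull request gets checked automatically, which is most of the benefit.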

I assume you're using Git. Especially if you're going to be the maintainer on a code level, learn to use Git beyond just push and pull. Clone a repo of a project you don't care about and try the more advanced commands as you make little changes to the code, so if you ruin everything you haven't actually set your own work back. Learn to branch and merge and work with remotes and cherry-pick and bisect. Read this super useful explanation of the Git model which articulates what's actually doing what -- it helps.
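A throwaway practice session might look something like this -- the repository URL is a placeholder (any project you don't care about will do), and I'm assuming its main branch is called master:

  $ git clone https://github.com/example/some-project.git && cd some-project
  $ git remote -v                    # see which remotes you're tracking
  $ git checkout -b scratch          # a branch you can freely ruin
  $ # ...edit a file or two, then:
  $ git add -p                       # stage changes hunk by hunk
  $ git commit -m "scratch: try something small"
  $ git checkout master              # back to the main branch
  $ git cherry-pick scratch          # copy that one commit onto master...
  $ git reset --hard HEAD~1          # ...then throw the copy away again
  $ git merge scratch                # or bring it over with a merge instead
  $ git bisect start                 # begin binary-searching history for a bad commit

Because the clone is disposable, the worst case is deleting the directory and cloning afresh.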

Good resources here include Brooks's The Mythical Man-Month, DeMarco & Lister's Peopleware, Heidi Waterhouse's "The Seven Righteous Fights", Camille Fournier's blog, and my own talk "Learn Tech Management in 45 Minutes" and my article "Software in Person". I myself earned a master's in technology management and if you are super serious about becoming a technology executive then that's a path I can give more specific thoughts on, but I'm not about to recommend that amount of coursework to someone who isn't looking to make a career out of this.

Open source management: And some best practices are the specific social, product management, architectural, and infrastructural best practices of open source projects. A few examples:

If you're the maintainer, it's key to reply to new project-related emails, queries, bug reports, and patches fast; a Mozilla analysis backs up our experience that a kind, fast, negative response is better than a long silent delay. Reply to people fast, even if it's just "I saw this, thank you, I'm busy, will get to this in a few weeks," because otherwise the uncertainty is deathly and people's enthusiasm and momentum drip away.

Make announcements somewhere public and easily findable that say something about the current state of your project, e.g., about whether it's ready to use or when to expect it to be. This could even just be someplace prominent in your README when you're just getting started. This is also a good place to mention if you're going to be at any upcoming conferences, so people can connect to you that way.

Especially when it comes to code, docs, and bug/feature/task lists, work in the open from as early as possible, preferably from the start. Treat private work as a special case (sometimes a useful one when it comes to communication with users and with new contributors, as a tidepool incubates growth that can then flow into the ocean).

I am sad, as a FLOSS zealot, to say that you should probably be on the closed-source platform that is GitHub. But yeah, the intake funnel for code and bug contributors is easier on GitHub than on any other platform; unless you are pretty sure you already know who all the people are who will use and improve this software, and they're all happy on GitLab or similar, GitHub is going to get you more and faster contributors.

You are adjacent to or embedded in other programming communities, like the programming language & frameworks you're using. Use the OSI-approved license that the projects you're adjacent to/depending on use, to make reuse easier.

It's never too early to think about governance. As Christie Koehler of Authentic Engine warns, to think about codes of conduct, you also gotta think about governance. (The Contributor Covenant is a popular starting point.) If you can be under the umbrella of a software-related nonprofit, like NumFOCUS, that'll help you make and implement these choices.

Top reading recommendation: Karl Fogel's Producing OSS is basically the bible for this category, and the online version is up-to-date with new advice from this year. If you read Producing OSS cover-to-cover you will be entirely set to start and run your project.

Additionally: Fogel also co-wrote criteria for assessing whether a project "is created and managed in a sustainably open source way". And I recommend my own blog post "How To Improve Bus Factor In Your Open Source Project", the Linux Foundation CII criteria (hat-tip to Benjamin Gilbert), "build your own rockstars" by one of the founders of the Dreamwidth project, and "dreamwidth as vindication of a few cherished theories" by that same founder (especially the section starting "our development environment and how we managed to create a process and culture that's so welcoming").

Obligatory plug: I started Changeset Consulting, which provides targeted project management and release management services for open source projects and the orgs that depend on them. In many ways I am maintainer-as-a-service. If you want to talk more about this work, please reach out!

Filed under:


: Tips To Increase Your Conference Talk Acceptance Rate: This year I submitted talks to several tech conferences and got a higher acceptance rate than I had been used to. For instance, this year I will speak for the first time at OSCON and PyCon North America, conferences that had previously rejected my proposals.

Why did this happen? I am a more polished and experienced speaker than I was in previous years, yes; program committees can see more videos and read more transcripts of my past talks. I have a better résumé and more personal connections. And through practice, I've gotten past the "get ideas on the page" stage of writing conference proposals, and learned how to better suggest a useful talk relevant to the audience.

Those factors you can't replicate today. But some, you can. Here they are. I believe following these tips will increase the acceptance rate for your software conference talk submissions.

Learn what they need.

Check whether you already know someone involved in selecting talks for the conference or a sub-track. If you do, ask them if they have any topic gaps you could help fill. Maybe they've gotten no talks yet on, say, Python web frameworks other than Django; they might specifically encourage you. And even if you don't know the con-runners personally, check their social media presences, in case they've spoken there and specifically asked for more talks about certain topics, or more talks from people from underrepresented perspectives or groups (example).

If this conference has happened before, look at the previous year's schedule; look for the range of the possible. What is this conference's universe of discourse? This helps you match the audience's interests and helps you avoid duplicating a conference talk from last year. What topics are adjacent to the topics they already seem to have covered?

Speaking of duplication, here's my take on "repeating yourself":

Tell them if you gave the talk already.

If you've given the talk before, say so, and link to the text, transcript and video/audio. Having all this info ready in your comprehensive "past talks" page is handy (see below). They can try before they buy! If you gave the talk to a standing-room only crowd and great acclaim, say so! And even if that was not the case, if you've already given this talk at another conference, now they know that it was good enough for someone else already, which is a useful social signal. I delivered "HTTP Can Do That?!" at Open Source Bridge last year, and in 2016 PyCon North America and Great Wide Open accepted my proposal to give nearly the same talk.

Different people and subcommunities follow different norms or rules of etiquette in distinguishing rude repetitiveness from sensible re-use. You might be able to lead the same multi-hour skill-building workshop at one convention multiple years in a row. I don't think I'll be re-proposing "HTTP Can Do That?!" in 2017 at any conferences; a 2-year exposure window feels all right to me.

More on reuse:

Reuse your blog post that went viral.

Basing the talk on a blog post or article of yours that got a lot of responses is great. Point to it and to the long comment threads, response blog posts, etc. This demonstrates that the community is interested in your thoughts on this topic, and that you've already thought about it a lot. I have successfully done this by turning my "Inessential Weirdnesses in Open Source" blog post from 2014 into a talk LibrePlanet and OSCON accepted this year.

I know, I said I'd only give you advice you can replicate today. I have no secrets to share about the internet fame lottery. But you can think aloud in a blog post or a Model View Culture essay or LWN article, and get feedback that helps you sharpen your message. Or, if you found someone else's article super provocative, you can ask them to pair up with you and co-submit.

And in general:

Save what you write.

Look at the talk submission form and figure out what they'll want, but write your submission and save it somewhere else. This makes it easier to reuse your proposal for multiple conferences, and to have multiple proposals to submit to any one conference.

Indeed:

Submit two or three proposals to each conference.

If I want to speak at a conference, I usually submit multiple proposals on different topics. I have never heard of a conference that limits speakers to one submission apiece. The first time it occurred to me to do this: 2009, when I saw that the Open Source Bridge submission system lets everyone see everyone else's submission. I saw that some men were submitting three, four, even five proposals. Oh, you can do that! I submitted three talks, and one got in.

For each of those proposals:

Specify the audience and the takeaways.

Some conference submission forms explicitly ask who the audience is for your talk, or what prerequisites you think they should have. Also, some ask for the objectives of your talk, or what the audience will take away. This is a great question. Give specific answers. Even if the conference doesn't ask for these things, I suggest you include them anyway -- put them in the abstract, additional info field, detailed description, or similar.

Two examples, from talks I successfully submitted. Example 1:

Audience: Developers with at least enough web development experience that they've used GET and POST, but who are unfamiliar with using DELETE or conditional GET

Objectives: Attendees will learn about HTTP 1.1 verbs, headers, response codes, and capabilities they did not know of before, with use cases, example code, and jokes. They will walk away feeling more capable of using more of the HTTP featureset and with a greater understanding of the underlying design of the protocol.

Example 2:

Takeaway: Open source contributors and leaders who are already comfortable with our norms and jargon will learn how to see their own phrasings and tools as outsiders do, and to make more hospitable experiences during their outreach efforts.

Prereqs: Attendees need to already be able to use core open source tools (like version control, IRC, mailing lists, bug trackers, and wikis), and be familiar with general open source culture and trends such as "+1", scorn towards Microsoft Windows, and the argument between copyleft and permissive licenses.

And in each proposal:

Provide a detailed outline.

Be concise in the short-form description, and then write out a big outline for the "more details" or longer abstract. For 45-minute talks I've provided detailed outlines/abstracts of 300-1200 words. The committee can easily infer that you have already thought a lot about this topic and are pretty prepped, and thus that there's less risk of you giving a half-baked or rambling talk.

Finally, three more logistical tips:

Get the proposals in on time.

You know yourself. You have probably figured out how to avoid missing appointments and deadlines. Do that. Set up phone alarms, calendar reminders, a boomerang'd email, a Twitter feed subscription, a promise to an accountability buddy, what have you.

Publish a "My past talks" page.

Here's mine. List your past talks, including workshops and classes you've led and podcasts or interviews that feature you. Where possible, link to or embed sample video and audio so a conference can try before they buy. If you can, name the conference, place, and year for each one; in some cases it might make more sense to say "led weekly brownbag talk series, Name of Employer, 2010-2012" or similar. A longer public speaking résumé means you're less of a risk.

Customize your bio.

A good biographical paragraph establishes your credibility and the likelihood that you'll say interesting things about the subject. For instance, I've taught interactive workshops before, and I've won an Open Source Citizen Award. Sometimes I put this kind of thing in my bio, sometimes in an "additional info" field. This is one place that a comprehensive "My past talks" page comes in handy. I can cut and paste stuff from that page into a customized bio that demonstrates why I'm a great choice to give this talk.

That's what I know so far.

I add my voice here to the multitude of resources on this topic, and especially commend to you Lena Reinhard's excellent and comprehensive talk prep guide and the weekly Technically Speaking email newsletter.

Thanks to Christie Koehler for the conversation that sparked this post!

Filed under:


: What Is Maintainership?: Yesterday, at my first LibrePlanet conference, I delivered a somewhat impromptu five-minute lightning talk, "What is maintainership? Or, approaches to filling management skill gaps in free software". I spoke without a script, and what follows is what I meant to say; I hope it bears a strong resemblance to what I actually said. I do not know whether any video of this session will appear online; if it does, I'll update this entry.

What is Maintainership?

Or, approaches to filling management skill gaps in free software

Sumana Harihareswara, Changeset Consulting
LibrePlanet, Cambridge, MA, 20 March 2016

Why do we have maintainers in free software projects? There are various different explanations you can use, and they affect how you do the job of maintainer, how you treat maintainers, how and whether you recruit and mentor them, and so on.

So here are three -- they aren't the only ways people think about maintainership, but these are three I have noticed, and I have given them alliterative names to make it easier to think about and remember them.

Sad: This is a narrative where even having maintainers is, fundamentally, an admission of failure. Madison said a lot of BS, but one thing he said that wasn't was: "If men were angels, no government would be necessary." And if every contributor contributed equally to bug triage, release management, communication, and so on, then we wouldn't need to delegate that responsibility to someone, to a maintainer. But it's not like that, so we do. It's an approach to preventing the Tragedy of the Commons.

I am not saying that this approach is wrong. It's totally legitimate if this is how you are thinking about maintainership. But it's going to affect how your community does it, so, just be aware.

Skill: This approach says, well, people want to grow their skills. This is really natural. People want to get better, they want to achieve mastery, and they want validation of their mastery, they want other people to respect their mastery. And the skill of being a maintainer, it's a skill, or a set of skills, around release management, communication, writing, leadership, and so on. And if it's a skill, then you can learn it. We can mentor new maintainers, teach them the skills they need.

So in this approach, people might have ambition to be maintainers. And ambition is not a dirty word. As Dr. Anna Fels puts it in her book Necessary Dreams, ambition is the combination of the urge to achieve mastery of some domain and the desire to have your peers, or people you admire, acknowledge, recognize, validate your mastery.

With this skills approach, we say, yeah, it's natural that some people have ambition to get better as developers and also to get better at the skills involved in being a maintainer, and we create pathways for that.

Sustain: OK, now we're talking about the economics of free software, how it gets sustained. If we're talking about economics, then we're talking about supply and demand. And I believe that, in free software right now, there is an oversupply of developers who want to write feature code, relative to an undersupply of people with the temperament and skills and desire to do everything else that needs doing, to get free software polished and usable and delivered and making a difference. This is because of a lot of factors, who we've kept out and who got drawn into the community over the years, but anyway, it means we don't have enough people who currently have the skill and interest and time to do tasks that maintainers do.

But we have all these companies, right? Companies that depend on, that are built on free software infrastructure. How can those with more money than time help solve this problem?

[insert Changeset Consulting plug here]. You can hire my firm, Changeset Consulting, to do these tasks for a free software project you care about. Changeset Consulting can do bug triage, doc rewriting, user experience research, contributor outreach, release management, customer service, and basically all the tasks involved in maintainership except for the writing and reviewing of feature code, which is what those core developers want to be doing anyway. It's maintainer-as-a-service.

Of course you don't have to hire me. But it is worth thinking about what needs to be done, and disaggregating it and seeing what bits companies can pay for, to help sustain the free software ecology they depend on.

So: sad, skill, sustain. I hope thinking about what approach you are taking helps your project think about maintainership, and what it needs to do to make the biggest long-term impact on software freedom. Thank you.

Filed under:


: What Should We Stop Doing? (FLOSS Community Metrics Meeting keynote): "What should we stop doing?": written version of a keynote address by Sumana Harihareswara, delivered at the FLOSS Community Metrics Meeting just before FOSDEM, 29 January 2016 in Brussels, Belgium. Slide deck is a 14-page PDF. Video is available. The notes I used when I delivered the talk were quite skeletal, so the talk I delivered varied substantially on the sentence level, but covered all the same points.

Photo of me at the FLOSS Metrics meeting, public domain by ben van't ende: https://photos.google.com/share/AF1QipMGh90Jfl8uVKH3U-e4CGF93i-vbHvhjbWVOvkn3ZlOeBAoc5PX_n_augA9v-cvPQ/photo/AF1QipNmQJdbqw2TrhcX6HuooqUuQmLfFbRkn73QW_Aq?key=NXFqUUVqdlN6MWdQSFdSNEFBSVFKajRpQVVQNnpn

I'd like to start with a story, about the excellent boss I worked for when I was at the Wikimedia Foundation, Rob Lanphier, and what he told me when I'd been on the job about eight months. In one of our one-on-one meetings, I mentioned to him that I felt overwhelmed. And first, he told me that I'd been on the job less than a year, and it takes a year to ramp up fully in that job, so I shouldn't be too worried. And then he reminded me that we were in an amazing position, that we would hear and get all kinds of great ideas, but that in order to get anything done, we would have to focus. We'd have to learn to say, "That's a great idea, and we're not doing it." And say it often. And, he reminded me, I felt overwhelmed because I actually had the power to make choices, about what I did with my time, that would affect a lot of people. I was not just cog # 15,000 doing a super specialized task at Apple.

So today I want to talk with you about how to use the power you have, in your open source projects and organizations, and about saying no to a lot of things, so you can focus on doing fewer things well -- the Unix philosophy, right? I'll talk about a few tools and leave you with some questions.

Tool 1: Remember to say no to the lamppost fallacy

The lamppost fallacy is an old one, and the story goes that a drunk guy says, "I dropped my keys, will you help me look for them?" "OK, sure. Where'd you drop them?" "Under that tree." "So why are you looking for them under this lamppost?" "Well, the light is better here."

A. Quantitative vs qualitative in the dev data

The first place we ought to check for the lamppost fallacy is in overvaluing quantitative metrics over qualitative analysis when looking at developer workflow and experience. Dave Neary said, in the FLOSSMetrics meeting in 2014, in "What you measure is what you get. Stories of metrics gone wrong": Use qualitative and quantitative analysis to interpret metrics.

When it comes to developer experience, you can be analytical while being both quantitative and qualitative. And you rather have to be both, because as soon as you start uncovering numbers, you start asking why they are what they are and what could be done to change that, and that's where the qualitative analytical approach comes in.

Qualitative is still analytical! Camille Fournier's post, "Qualitative or quantitative but always analytical", goes into this:

qualitative is still analytical. You may not be able to use data-driven reasoning because you're starting something new, and there are no numbers. It is hard to do quantitative analysis without data, and new things only have secondary data about potential and markets, they do not have primary data about the actual user engagement with the unbuilt product that you can measure. Furthermore, even when the thing is released, you probably have nothing but "small" data for a while. If you only have a thousand people engaging with something, it is hard to do interesting and statistically significant A/B tests unless you change things drastically and cause massive behavioral changes.

This is applicable to developer experience as well!

For help, I recommend the Wikimedia movement's Grants Evaluation & Learning team's table discussing the quantitative and qualitative approaches you can take -- ethnography, case studies, participant observation, and so on -- to deepen understanding. The qualitative side complements the quantitative side, which is about generalizing findings.

B. Quantifiable dev artifacts-and-process data versus data about everything else

Another place to check for the lamppost fallacy is in overvaluing quantifiable data about programming artifacts and process over all sorts of data about everything else that matters about your project. Earlier today, Jesus González-Barahona mentioned the many communities -- dev, contributor, user, larger ecosystem -- that you might want to research. There's lots of easily quantifiable data about development, yes, but what is actually important to your project? Dev, user, sysadmin, larger ecology -- all of these might be, honestly, more important to the success of your mission. And we also know some things about how to get better at getting user data.

For help, I recommend the Simply Secure guides on doing qualitative UX research, such as seeing how users are using your product/application. And I recommend you read existing research on software engineering, like the findings in Making Software: What Really Works and Why We Believe It, the O'Reilly book edited by Andy Oram and Greg Wilson.

Tool 2: know what kind of assessment you're trying to do and how it plays into your theory of change

Another really important tool that will help you say no to some things and yes to others is knowing what kind of assessment you're trying to make, and how that plays into your hypothesis, your theory of change.

I'm going to mess this up compared to a serious education researcher, but it's worth knowing the basics of the difference between formative and summative assessments.

Formative assessment or evaluation is diagnostic, and you should use it iteratively to make better decisions to help students learn with better instruction & processes.

Summative assessment is checking outcomes at the conclusion of an exercise or a course, often for accountability, and judging the worth/value of that educational intervention. In our context as open source community managers, this often means that this data is used to persuade bosses & community that we're doing a good job or that someone else is doing a bad job.

As Dawn Foster last year said in her "Your Metrics Strategy" speech at the FLOSSMetrics meeting:

METRICS ARE USEFUL Measure progress, spot trends and recognize contributors.
Start with goals: WHY FOCUS ON GOALS? Avoid a mess: measure the right things, encourage good behavior.

Here's Ioana Chiorean, FLOSS Community Metrics meeting, January 30th 2015, "How metrics motivate":

Measure the right things... specific goals that will contribute to your organization's success

Dave Neary in 2014 in "What you measure is what you get. Stories of metrics gone wrong" at the Metrics meeting said:

be careful what you measure: metrics create incentives
Focus on business and community's success measurements

And this is tough. Because it can be hard to really make a space for truly formative assessment, especially if you are doing everything transparently, because as soon as you gather and publish any data, people will use it to argue that we ought to make drastic changes, not just iterative changes. But it might help to remember what you are truly aiming at, what kind of evaluation you really mean to be doing.

And it helps a lot to know your Theory of Change. You have an assessment of the way the world is, a vision of how you want the world to look, and a hypothesis about some change you could make, an activity or intervention you could perform to move us closer from A to B.

There's a chicken and egg problem here. How do you form the hypothesis without doing some initial measurement? And my perhaps subversive answer is, use ideas from other communities and research to create a hypothesis, and then set up some experiments to check it. Or go with your gut, your instinct about what the hypothesis is, and be ready to discard it if the data does not bear it out.

For help: Check out educational psychology, such as cognitive apprenticeship theory - Mel Chua's presentation here gives you the basics. You might also check out the Program/Grant Learning & Evaluation findings from Wikimedia, and try out how the "pirate metrics" funnel -- Acquisition, Activation, Retention, Referral, Revenue, or AARRR -- fits with your community's needs and bottlenecks.

Tool 3: if something doesn't work, acknowledge it

And the third tool is that when we see data saying that something does not work, we need to have the courage to acknowledge what the data is saying. You can move the goalposts, or you can say no and cause some temporary pain. We have to be willing to take bug reports.

Here's an example. The Wikimedia movement likes to host editathons, where a bunch of people get together and learn to edit Wikipedia together. We hoped that would be a way to train and retain new editors. But Wikipedia editathons don't produce new long-term editors. We learned:

About 52% of participants identified as new users made at least one edit one month after their event, but the percentage editing dropped to 15% in the sixth month after their event

And, in "What we learned from the English Wikipedia new editor pilot in the Philippines":

Inviting contribution by surfacing geo-targeted article stubs was not enough to motivate or help users to make their first edits to an article. Together, all new editors who joined made only six edits in total to the article space during this experiment, and they made no edits to the articles we suggested.

Providing suggestions via links to places users might go for help did not appear to sufficiently support or motivate these new editors to get involved. 50 percent of those surveyed later said they didn’t look for help pages. Those who did view help pages nevertheless did not edit the suggested articles.

But over and over in the Wikimedia movement I see that we keep hosting those one-off editathons. And they do work to, for instance, add new high-quality content about the topics they focus on, and some people really like them as parties and morale boosters, and I've heard the argument that they at least get a lot of people through that first step, of creating an account and making their first edit. But that does not mean that they're things we should be spending time on, to reverse the editor decline trend. We need to be honest about that.

It can be hard to give up things we like doing, things we think are good ideas and that ought to work. As an example: I am very much in favor of mentorship and apprenticeship programs in open source, like Google Summer of Code and Outreachy. Recently some researchers, Adriaan Labuschagne and Reid Holmes, raised questions about mentorship programs in "Do Onboarding Programs Work?", published in 2015, about whether these kinds of mentorship programs move the needle enough in the long run, to bring new contributors in. It's not conclusive, but there are questions. And I need to pay attention to that kind of research and be willing to change my recommendations based on what actually works.

We can run into cognitive dissonance if we realize that we did something that wasn't actually effective. Why did I do this thing? why did we do this thing? There's an urge to rationalize it. The Wikimedia FailFest & Learning Pattern hackathon 2015 recommends that we try framing our stories about our past mistakes to avoid that temptation.

Big 'F' failure framing:
  1. We planned this thing: __________________________
  2. This is how we knew it wasn't working: __________________________
  3. There might have been some issues with our assumption that: __________________________
  4. If we tried it again, we might change: __________________________

Little 'f' failure framing:

  1. We planned this thing: __________________________
  2. This is how we knew it wasn't working: __________________________
  3. We think that this went wrong: __________________________
  4. Here is how to fix it: __________________________

For help with this tool, I suggest reading existing research evaluating what works in FLOSS and open culture, like "Measuring Engagement: Recommendations from Audit and Analytics" by David Eaves, Adam Lofting, Pierros Papadeas, Peter Loewen of Mozilla.

Priorities

I have a much larger question to leave you with.

One trend I see underlying a big chunk of FLOSS metrics work is the desire to automate the emotional labor involved in maintainership, like figuring out how our fellow contributors are doing, making choices about where to spend mentorship time, and tracking a community's emotional tenor. But is that appropriate? What if we switched our assumptions around and used our metrics to figure out what we're spending time on more generally, and tried to find low-value programming work we could stop doing? What tools would support this, and what scenarios could play out?

This is a huge question and I have barely scratched the surface, but I would love to hear your thoughts. Thank you.

Sumana Harihareswara, Changeset Consulting

Filed under:


(1) : Tips for New Summer Interns: Three tips to help new Google Summer of Code applicants and interns, some of which all remote workers could stand to remember:

  1. Never let yourself get stuck on a technical question or problem for more than half an hour. Take a break, ask questions in IRC or a mailing list, find a technical book to read like The Architecture of Open Source Applications, look at some other codebase to see how they do it, eat a meal, or do something else, then come back to the problem.
  2. Never let yourself get stuck waiting for someone's reply for more than 2 business days (Monday through Friday). Escalate -- ask your mentor. If your mentor isn't helping, ask your org admin. If the org admin isn't helping, ask on the GSoC discussion forum, or email Carol Smith.
  3. Ask yourself at the start of every day: what did I accomplish yesterday? What will I try to do today? What are the obstacles I think I will run into? If you ask yourself those three questions and answer honestly -- especially if you let your mentor and team know the answers -- then you will prevent long delays and help keep your morale up.
Filed under:




Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.
Permissions beyond the scope of this license may be available by emailing the author at sumanah@panix.com.