Advice to open stuff newbies, maintainers, and other contributors.
# 12 Jun 2019, 10:43AM: Hey, You Left Something Out:
Of course, not all the responses I get to my work are positive. Sometimes I get criticism. And a subset of that criticism says more about the person giving it than about the quality of what I've made. I try to keep a thick skin about that, but I don't always succeed.
One particular kind of response has piqued my interest lately. Some of the feedback I get means to be praise, but contains a kinda-joking complaint about something that the person thinks I left out. I saw this recently in a recommendation of my PyCon 2016 talk, "HTTP Can Do That?!", and in another commenter's response. And some commentary of the "they/you left x out" variety is straightforward criticism.
At its most loving, I think this kind of commentary means to be a kind of "yes-and" response, sharing the experience of enjoying something and extending it by recommending another related thing. (I have been working on this blog post, on and off, for a few months; the day I am posting it, I see a perfect example.) And I can empathize with that!
But, a lot of the time, this kind of response comes with an explicit marker or implicit connotation of complaint: the author/speaker did not mention the thing that I think should be mentioned, and therefore, something is wrong.* Perhaps a more useful approach would be to wonder, in a genuinely curious way, why the author didn't mention it. Was it out of ignorance? Was it a deliberate choice, and, if so, to what purpose?
Marco Rogers recently observed: "A lot of men seem to have been conditioned to think that telling someone that you disagree is the same as asking them a question. Like the way they learn to engage is by *creating a conflict*." Maybe that plays into this.
And as Josh Millard notes,
There's a lot of this sort of detached entitlement out there.... "I want content generated to my tastes" collides with "I'm making something with my bare hands" in such a way that the folks in the more passive former camp feel somehow totally comfortable asserting the high ground on the people in the latter.
Personal taste is personal taste and everybody's got a right to it; criticism is useful, at least when it's useful. Beyond that, though, there's a lot of Why Am I Not Being Correctly Entertained out there in the world that manages to get off the leash for no good reason, and from the doing-the-work, learning-the-craft, making-the-content side of things that does get awful tiring.
And maybe that plays into this too.
Compilation-makers, list-makers, etc. run into this kind of criticism frequently, as fanvidders discussed in a Vividcon panel about multisource vids. Perhaps some readers read any list of things sharing a particular characteristic as an attempt to make the one canonical list, and thus read any publicly shared list as implicitly inviting corrections and additions toward this goal.
Last year bironic commented wryly,
I love how many multifandom vids lately come with explainers about scope, as we brace for people to come in and yell about someone who was included or left out.
And I appreciate vidder thingswithwings's response:
...there are so many selection choices to make, and only so many seconds of song . . . I think it's good to make it clear that we're making these decisions thoughtfully...
That's the spirit I see in thingswithwings's vidder's notes on their joyous, spirited and dancey vid "Gettin' Bi" and eruthros's vidder's notes on her excellent, moving, incisive vid "Straightening Up the House". And that's the spirit I'd like to inhabit as I make and share recommendation lists, compilations, etc. going forward.
And in that spirit I'll address here the praise-complaints of my own work that I linked to in my second paragraph. I scoped "HTTP Can Do That?!" to discuss underappreciated real, working parts of HTTP and share examples of things that work, even if they're bad ideas, as illustrations. I didn't show the cover of Bradbury's Fahrenheit 451 in my talk because -- as I mentioned during the Q&A -- I think it's fine to leave that particular connection as a bit of an Easter egg so some people have something to figure out when they look up response status code 451 later. I didn't include the teapot response code (418) because it's already fairly popular and well-known as a joke response code, and I wanted to spend my time on stuff folks weren't as likely to run across in other fora, and because it's a joke that isn't in the HTTP standards. I made a tradeoff between concision and nuance. Similarly, I didn't use the word "neoliberal" in that post about feelings of overwhelmption because that wasn't the point.
People who want to compliment work should probably learn to give compliments that sound encouraging. As one writer notes: "I think Twitter, for all its good qualities, can very much be a Killer Of Work exactly because people don't know how to say "that's so awesome!" or lift creators up in the idea stage." And people who genuinely want to submit you-left-something-out bug reports about someone else's work** should probably spend a few moments checking the maker's stated criteria and purpose, and reflecting on whether they perhaps had an interesting reason for the exclusion or omission, or on how much the gut biomes of the creator's intended audience match the reader's gut biome. "I'm curious about the choice you made" may sound passive-aggressive, but I'd rather hear that than something that's just flat-out aggressive.
(Oh, and to be tiresomely empowering again: a human created the thing you're responding to; you're a human and you could make a thing, too.)
* "You forgot Poland" always comes to mind, even though a face-to-face debate is such an unusual context compared to the ways I usually get feedback like "you forgot x".
** even something tiny like a single joke
Thanks to Mindy Preston and others who commented on drafts of this idea & piece.
# 17 Apr 2019, 09:13AM: Recurse Center, What Really Works And How We Know:
I participated in Recurse Center (formerly Hacker School) in 2013 and in 2014, and emerged a better programmer, a calmer and kinder person, and a more confident learner. Gender diversity was part of the quality of that experience:
When part of the joy of a place is that gender doesn't matter, it's hard to write about that joy, because calling attention to gender is the opposite of that....
But, as Nick Bergson-Shilcock says in "What we've learned from seven years of working to make RC 50% women, trans, and non-binary", "We focus on diversity so Recursers can focus on programming.":
In April of 2012, we announced our goal to make RC 50% women. Seven years later, we are close to reaching an improved version of this goal: 48% of new Recursers in 2019 so far identify as women, trans, or non-binary. This post is a summary of what we’ve tried, learned, and accomplished over the past seven years, as well as our overall strategy and why we choose to prioritize this work.
Bergson-Shilcock's case study shares stats, what didn't work, and what they don't know yet -- the people who run RC are consistently like this, and this writeup exemplifies their judgment, integrity, and foresight. Even when I've disagreed with RC's faculty, I have always come away from the disagreement with my trust in them intact or increased. How many institutions could I describe in that way? Not many.
One last thing -- I've recently been trying to avoid saying "community" when I really mean group, set, school, industry, project, or workplace, and Bergson-Shilcock's articulation is gonna help me do that and value substantive communities:
Having a genuine community requires that people know the other people around them, and that everyone shares some fundamental values and purpose.
# 27 Feb 2019, 09:01AM: GSoC/Outreachy Mentoring Orgs: Consider Giving Applicants English Tutoring:
Google Summer of Code just announced the 207 mentoring organizations (open source projects seeking participants) for this year's round, and Outreachy's 9 mentoring orgs also announced open internship projects.
This blog post is directed at org admins and mentors for those projects.
Many of your applicants are not fluent English writers. You have probably already experienced this, but stats back me up: Last year, GSoC had 5,199 applicants from 101 countries, many of which are not countries where English is a major medium of instruction. And nearly all the schools in the top ten were engineering schools in India, and Indian engineering schools do not teach students how to write in English at what the open source world considers a professional level. That lack of communication skills hurts your applicants as engineers, and as potential open source contributors in the long run.
I was an org admin for several years and saw, over and over, how many of our applicants had a hard time getting help and getting their ideas across because of poor writing skills. Mentors reviewed code and helped them become better coders, but weren't giving the same kind of systematic feedback about emails, bug reports, and so on, so applicants' writing skills stagnated.
In 2017, to address this, I ran English tutoring sessions for Zulip contributors. You can do this too.
Here's the call for volunteer tutors I used. Note that I explained my request in terms of global diversity and inclusion, reassured them that I'd set them up in the chatroom and be available to backchannel with them, and said "It's fine if you've never done this before and it's fine if you're not a programmer and don't know programming jargon." I circulated this request in scifi fandom, in particular in the fanfic community, which has tens of thousands of people who enjoy volunteering to proofread each other's written work and chatting online. A big source of volunteers was the Radio Free Monday weekly fandom newsletter (6 March 2017). I got 30 volunteers and was able to schedule 15 of them to tutor, and several of those volunteers were willing to do multiple 90-minute sessions.
Here's the announcement email I sent to our GSoC applicant mailing list.
We ran the tutoring sessions in the "learning" channel of our Zulip chat so it was easy to paste in links, explain proper formatting, and put side conversations in another thread. Here's the Dropbox Paper shared signup sheet where I kept the schedule and instructions for learners and tutors (basically: learners show up with a short written sample and with some thoughts about how they want to improve, and tutors take 30 minutes to critique each sample). The signup sheet format was, for example:
Date & time: Sunday, March 19th, 1:00-2:30 PM ET (10:30 PM-12 AM in India)
If only one person signed up for a session, that person got help for 45-60 minutes. Or, sometimes, we got drop-ins as other contributors got curious and realized they could ask for help on their blog posts or GSoC applications as well. After I got each tutor settled in I didn't have to pay attention for the whole 90 minutes, so I could do other Zulip work and check in occasionally -- and eventually other Zulip contributors helped out by "cohosting" so sessions could happen without me.
We ran about 20 sessions, and about 40 contributors got tutoring. They wrote better internship applications, blog posts, bug reports, code comments, pull requests, and mailing list posts because of what they learned in these sessions -- and they were so grateful for even 30 minutes of in-depth advice, because some of them had never gotten friendly, personal critique of their written English from a fluent speaker before.
So please copy me! And if several people tell me their projects are doing this, I'll help publicize your efforts together. There are a lot of fluent English writers with free time and an internet connection who would love to help the open source community in this way. Like Wikipedia, we can turn "Someone is WRONG on the Internet" into a good thing. :-)
# 07 Feb 2019, 09:17AM: Socratic Questioning, Devil's Advocacy, and Conversational Power Tools:
"Devil's advocate" was a job. In order for someone to perform the role of Devil's advocate, someone else had to appoint them to that position. And the Devil's advocate performed a bounded task within an established relationship with his debate opponent, towards the shared goal of a particular decision (whether to canonize someone).
Socratic questioning is a technique that a teacher uses with a student when both of them have agreed to that relationship. It includes a commitment by the teacher to the student's intellectual growth, and a variety of techniques in reflective listening.
I hang out in a lot of communities and with a lot of friends who care a lot about seeking truth and avoiding delusion. That's an admirable thing to want.
But in acting out these values, sometimes we misuse cool-looking tools, like Socratic questioning or the Devil's advocate position, by using them when we don't yet have a trusting relationship or (in particular with the "Devil's advocate" approach) a defined question and decision framework. For instance, if you consistently say things you don't mean in arguments, the people you are arguing with will come to trust you less. My friendships, work relationships, and hobby communities usually sit in the "caring" or "collaborative" part of the caring-to-combative spectrum;* if someone starts a competitive or even combative conversational game without first taking care to establish a magic circle, that breaks trust.
In conversation, when I find that I don't agree with someone else, I assume that our shared goal is to reach a mutual understanding. Perhaps one of us will persuade the other, or maybe we'll just understand why we disagree. But I'm open to revising that assumption in response to certain signals. When the person I'm talking with starts demanding that I stop to create and defend formal definitions for any word or phrase that I use, distributing the work of creating a shared understanding unequally, or cross-examining me without putting up their own point of view for examination, there's a level of disingenuousness there that I object to (the flip side of which a 2017 XKCD illustrates).
And the phrase "I'm just playing devil's advocate" in an online discussion, when the poster has not already asked others whether that's desired, is one of a suite of linguistic markers that make seasoned readers shake their heads. Because, as Alexandra Erin points out, "The phrase has basically morphed into Internet Argument Guy for 'I can argue with you but you can't argue with me.'"
If you want to "play devil's advocate" with me, or Socratically question something I've said, ask first, and mean it. And, as you reflect on whether you actually want to do that, consider the many other conversational approaches you might use instead.
* In retrospect I wish I'd considered this spectrum when discussing the liberty-to-hospitality spectrum.
# 03 Oct 2018, 03:13PM: A Reasonably Fast Way To Construct A Writing Portfolio:
Someone in my network wanted guidance in building a professional (often software-related) writing portfolio for the first time -- they want to give other people a portfolio of work they've already done, so that those people can consider hiring them for paid writing gigs. This person wanted advice on what to choose, how to curate and structure and present the portfolio, and whether to keep it private or publish it somewhere public.
Here's some free advice on getting started with that. I'm sure there are better ways to do this and be more polished in the final presentation, but here's how I suggested they get started.
- Start by taking 1-2 hours to assemble a big rough list of what you've written, in the last 10 years or so, that you might conceivably want to share in this portfolio. Be wide and inclusive, and if this feels overwhelming, remember that this does not have to be comprehensive -- you just want samples of different categories (like "neighbor-friendly explanations of technical topics", "bug reports", "profiles of individual people", "replies to support requests", "nonfiction essays", "research papers", "public conference talks", "HOWTO tutorials", and so on).
- Decide whether this will be a public or private portfolio. If most of these pieces are ones you really don't want to share, online, with the public, under your wallet name, then you're going to be doing this as a private portfolio. Otherwise you'll turn this into a page or subsite of your public website, and you can mention and summarize the private pieces and say "available upon request".
- Select the best 1-3 examples for each category. This might involve more digging, since by now you may have remembered another category or another piece.
- If it's a private portfolio, turn the whole thing into a giant PDF with chapter headings explaining what category each item is in. If it's public, do that, but also make a webpage -- Heidi Waterhouse's list and Betsy Haibel's list are straightforward examples you could use as patterns. If you want to be a little more explanatory and give the reader more guidance about the context for each piece, you could do something like what Lindsey Kuper does. And if you want to get really intense about it you could make something like what I've done with the Changeset Consulting "resources" page, with stock art!
- Every few months, review what you've written and see what you need to add to the portfolio. (I also keep a public mega-list of nonfiction and fiction, art, software, and zines I've made and a big list of my past talks, interviews, and stand-up comedy, which helps me when it's time to update the Resources page for Changeset.)
If you've been thinking of making a writing portfolio and putting it off, I hope this structure makes it more feasible.
# 12 Apr 2018, 10:29AM: On Online Advice:
I'll be speaking on a panel, "Social Media in Theory and Praxis: What is at Stake Now?" at the City University of New York (CUNY) Graduate Center, in New York, NY, on Wednesday, April 18, 2018 (next week). It's partly about how "[u]se of digital platforms and tools like Facebook, Twitter, YouTube, Instagram, and Google has altered cultural production, political processes, economic activity, and individual habits." And recently I've been thinking about advice, and how blogging and other social media affect this very fundamental interpersonal act.
I am a giant weirdo. What works for me might not work for you, and vice versa. Advice is like a diet plan. Our gut biomes are so varied and poorly understood that very strange-sounding diets inevitably work for some fraction of the population, and commonplace diet advice inevitably snarls up some people's digestive tracts. Similarly, your career, your household, your everything exists in a unique ecosystem, and advice you find condescending or hurtful may work for someone else, or even you ten years ago or ten years from now.
Sometimes I don't distinguish explicitly enough between things that work prescriptively for me and things that make sense to prescribe for All Of Humanity, and between prescriptive truths and descriptive truths.
Sometimes things I write down, even publicly, are aspirational and prescriptive self-motivational slogans. Many years ago, reading Steve Pavlina's wacky blog, I learned the concept of "lies of success": statements that are descriptively false -- or at least nonprovable -- but prescriptively true, like, "if I work hard and try new approaches when I get stuck, I will learn this". No one can promise that "will", so, it's not 100% accurate descriptively. But you may as well choose your unprovable and nonfalsifiable beliefs to serve your growth.
In every social context, there are topics that are apt to cause discord if we so much as name them -- sometimes the very word stops some people from thinking, like "abortion" or "cybersecurity". And when I write publicly, especially online, I do not control what social context my words are read in. So, some of these topics we can talk about in terms of our own values if we are super careful to be subjective and descriptive rather than prescriptive, but sometimes I find it hard to frame the conversation productively.
It's salutary for me to remember the ways in which I am an unextrapolatable weirdo. I'm grateful to all y'all for reminding me, pushing back when I act like my gut biome is the world.
# 06 Feb 2018, 09:48AM: The Ambition Taboo As Dark Matter:
PyCon just rejected my talk submission,* so I'll try to finish and post this draft that I've been tapping at for ages.
My current half-baked theory is that programmers who want any public recognition from our peers, recognition that meaningfully validates our personal mastery, basically have to do that through one of a few fora that therefore accrue less-spoken emotional freight. And two of those places are code review in open source projects** and proposal review in tech conference talk submissions, and the fact that we don't talk enough about the role of ambition when talking about these processes leads to unnecessary hurt feelings.
For context: We give talks for varied reasons. To teach, to make reusable documentation, to show off things we've made or things we know, to burnish our credentials and thus advance our careers, to serve our corporate brands' goals, to provide role models for underindexed folks from our demographics, to give a human face to a project and make it more approachable, it goes on.
A conference talk is a tool in a toolbox that has a lot of other tools in it. (The Recompiler, Linux Journal and LWN pay for articles, for instance.)
And conferences are more than lecture halls, of course -- they're networking opportunities, communities of practice, parties, vacations, sprints, and so on.
But when we talk about the particular pain or joy of having a talk accepted or rejected from a conference, there's an emotional valence here that isn't just about the usefulness of a talk or the community of a conference. We're talking about acceptance as a species of public professional recognition.
I've found it pretty useful to think about public professional recognition in the context of Dr. Anna Fels's book Necessary Dreams. She points out that the childhood or adolescent desire for fame is often a precursor to a more nuanced ambition, combining the urge to master some domain or skill with the desire for the recognition of one's peers or community. This influences how I think about awards, about job titles, and about encouraging technologists in the public interest, and about the job market's role in skill assessment.
So how can a programmer pursue public mastery validation? Here's what I see:***
- contributing to open source software (mastery validation: maintainers merging commits and thanking/crediting contributor for work)
- presenting at conferences (mastery validation: program committee accepting talk)
- posting comments to gamified platforms like Reddit, Hacker News, and Stack Overflow (mastery validation: upvotes and replies)
- publishing academic research (mastery validation: journal accepting paper, peers reviewing paper positively)
- writing books (mastery validation: publisher accepting & publishing book)
- starting and architecting technically challenging projects (mastery validation: skilled technologists cofounding with or working for you, or relying on or praising your work)
So, this stuff is fraught; let's not pretend it's not. We sometimes get rejected by conferences and talk about it; we try to take the perspective that we're collecting "no's"; we remind others that even successful and frequent speakers get rejected a lot, and that you can choose not to give up. And we give each other tips on how to get better at proposing talks. And that's all useful. But there's also another level of advice I want to give, to repeat something I said last year:
I try not to say "don't get discouraged," because to me that sounds like telling someone not to cry or telling someone to calm down. It's a way of saying "stop feeling what you're feeling." Instead, I try to acknowledge that something is discouraging but also -- if the other person's ready to hear it -- that we can come back from that: your feelings are legitimate, and here are some ways to work with them.
Some advice I hear about bouncing back from a conference talk rejection involves formalizing, creating systems to use to get better at writing proposals (my own tips mostly fall into this category) -- after all, in programming, you can learn to make better and better things without directly interacting with or getting feedback from individuals. The code compiles, the unit tests pass. And that can be soothing, because you can get the feedback quickly and it's likely to be a flavor of fair. (But that computer rarely initiates the celebration, never empathizes with you about the specific hard thing you're doing or have just done, and rarely autocredentials you to do something else that has a real impact on others.)
To formalize and abstract something makes it in some ways safer; it's safer to say "I'm working to pass the [test]" or "I'm building a [hard thing] implementation" or "I'm submitting a talk to [conference]" than to say "I am working to gain the professional respect of my profession". But that is one motivation for people to submit talks to tech conferences and to feel good or bad about the talks they give.
So part of my advice to you is: go ahead and be honest with yourself about how you feel. Rejection can be hard, working to get an unaccountable gatekeeper's acceptance**** and failing to get public professional recognition in your chosen field is a cause of anxiety, and so on. Be honest about how discouraging that can feel, and why, and what you wanted that you didn't get.
And another part of my advice is that I will ask, like the annoying programmer I am: what problem are you trying to solve? Because there are probably a lot of ways there that don't involve this particular gatekeeper.
And the most annoyingly empowering part of my advice is: Humans created and run PyCon and TED and Foo Camp and all the other shiny prestigious things; you're a human and you could do so too. Especially if you acknowledge not just your own but others' ambition, and leverage it.
* Maybe we'll do it in an open space anyhow.
** Another blog post for another time!
*** I've left some things out here.
We have some awards, e.g., ACM Distinguished Member, that you might get if you work really hard for decades in certain fields. That feels too far away for the kind of thing I'm thinking about.
I've left out the possibility of being promoted at your job, because many technologists perceive engineering job promotions as not particularly correlating with the quality of one's work as a programmer, which means a promotion doesn't send a strong signal, understood by peers outside one's organization, of validation of programming mastery. Then again, if your organization is old enough or big enough, maybe the career ladder there does constitute a useful proxy for the mental models of the peers whose judgment you care about.
I've left out various certifications, diplomas and badges because I don't know of any that meaningfully signal validation of one's mastery as a programmer industry-wide. And there's a lot of stuff to parse out that I feel undecided about, e.g., I find it hard to distinguish the status symbol aspect of admission from the signal that the final credential sends. And: A lot of people in this industry find it impressive when someone has been admitted to certain postsecondary engineering programs, regardless of whether the person graduates. And: In my opinion, the Recurse Center is an experience that has an unfortunate and unintended reputation for gatekeeping on the basis of programming skill, such that a big subset of people who apply and are rejected experience this as an authoritative organization telling them that they are not good enough as programmers (and Google Summer of Code and Outreachy have a related problem).
Of course, go ahead, write your own blog post where you talk about how wrong I am about what I list or exclude, especially because I come from a particular corner of the tech industry and I'm sure there's stuff I don't perceive.
**** Some conferences' gatekeepers are more unaccountable than others'; regardless, the feeling from the rejectee's point of view is, I bet, mostly the same. And you can start your own conference or join the program committee of an existing conference to see what it's like from the other side of the desk and wield a bit of the power yourself.
# 26 Jan 2018, 06:28PM: Preserving Threading In Google Group or Mailman Mailing List Replies with Thunderbird:
Have you ever wanted to reply to a mailing list post that wasn't in your inbox? I had that problem yesterday; here's how I fixed it.
Context: I'm the project manager for Warehouse, the software behind the new Python Package Index (PyPI) which -- thanks to funding from Mozilla and support from the Python Software Foundation -- is on its way to launching and replacing the old PyPI. I've been in the Python community for years, but -- just as when I went from "casual Wikipedian" to "Wikimedia Foundation staffer" -- I'm learning about lots of pockets of the Python community that I didn't yet know about. Specifically, Python packaging has a lot of different repositories and mailing lists. One of them is the Google Group pypa-dev, a mailing list for developers within the Python Packaging Authority.
I joined pypa-dev recently -- and, in skimming the archives, I found a months-old message I wanted to reply to while preserving threading (so that future folks and longtime subscribers would see the update in context). So I clicked on the dropdown menu in the upper right corner for that post and clicked "Show original", which got me the Message-ID header. But how could I get Thunderbird to let me write a reply with the appropriate In-Reply-To header? Preferably without having to install some extension to munge my headers?
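For anyone unfamiliar with how mail clients and archives thread messages: a reply carries the original message's Message-ID in its In-Reply-To (and References) headers, as described in RFC 5322. Here's a minimal sketch using Python's standard email library, with a made-up Message-ID standing in for the one you'd copy out of the "Show original" view:

```python
from email.message import EmailMessage

# Hypothetical Message-ID; a real one comes from the raw message headers.
original = EmailMessage()
original["Message-ID"] = "<example-id@googlegroups.com>"

# For an archive to nest the reply under the original, the reply's
# In-Reply-To and References headers must carry that Message-ID.
reply = EmailMessage()
reply["In-Reply-To"] = original["Message-ID"]
reply["References"] = original["Message-ID"]
```

Thunderbird sets these headers for you whenever you reply to a message it has open, which is exactly what the .eml trick takes advantage of.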
This reply to a StackExchange answer got me most of the way there; the basic approach is the same whether you're working with a Google Group or a Mailman list. (If it's a Google Group or a Mailman 3 list, you can of course reply via the web interface, but maybe you want to cc someone or have the history in your Sent folder, or you just prefer composing in Thunderbird.)
- First, you need to get the raw text, so you can get the Message-ID.
If you're looking at a Google Group message (example), click on the dropdown menu in the upper right corner for that post and choose "Show original" (example), then click the "Show only message" button to get a raw text page like this.
If you're looking at a Mailman 2 message (example), then navigate to the monthly archive. You can get there by clicking on the "More information about the [name] list" link at the bottom of the page, which takes you to a list info page (example), and from there, the "Visit the [name] Archives." link (example). Here on the archives-by-month page, download the archive for the month that has the message (using the "[ Gzip'd Text [filesize] ]" link in the "Downloadable version" column). And now you can, for instance, gunzip 2018-January.txt.gz in your terminal to get 2018-January.txt, which you can search to find the post you want to reply to.
If you're looking at a Mailman 3 message (example), look at the bottom of the left navbar for a "Download" button (hover text: "this thread in gzipped mbox format"). If you gunzip that you'll get a plain-text .mbox file which you can search to find the post you want to reply to.
- Now, no matter what mailing list software you had to wrangle, save the raw message as a temporary file with a .eml extension, e.g., /tmp/post.eml, to smooth the way for Thunderbird and your OS to think of this as a saved email message. If you're looking at a Mailman archive, this is where you select just that one message (headers and body) from the .txt or .mbox file and cut-and-paste it into a standalone .eml file.
- Open that file in Thunderbird: File menu, select Open, select Saved message, and navigate to /tmp/post.eml and open it.
- If all's gone well, the message pops open in its own window, complete with Reply and Reply All buttons! Go ahead and use those. Note that the From: and To: lines have been obfuscated or partially truncated to protect against spammers, so you'll probably need to fix those by hand, e.g., replacing at with @ and fixing any ellipses (...).
- Hit Send with the glow of thread-preservation satisfaction. Watch for your post to show up, properly threaded, in the list archives (example).
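If you'd rather script the extraction step than cut-and-paste by hand, Python's standard mailbox module can read the downloaded archive, since Mailman's monthly archives are in mbox format. A rough sketch (the subject fragment and file paths below are placeholders, not real ones):

```python
# Sketch: pull one message out of a downloaded (and gunzipped) Mailman
# archive and save it as a standalone .eml file that Thunderbird can open.
import mailbox

def extract_message(archive_path, subject_fragment, eml_path):
    """Find the first message whose Subject contains subject_fragment
    and write it, headers and body intact, to eml_path."""
    for message in mailbox.mbox(archive_path):
        if subject_fragment in (message["Subject"] or ""):
            with open(eml_path, "w") as f:
                f.write(message.as_string())
            return True
    return False

# e.g., after: gunzip 2018-January.txt.gz
# extract_message("2018-January.txt", "that thread title", "/tmp/post.eml")
```

Thunderbird should then open the .eml file as a normal saved message; you'll still need to de-obfuscate the addresses by hand, as noted above.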
# 07 Apr 2017, 03:36PM: Inclusive-Or: Hospitality in Bug Tracking:
Lindsey Kuper asked:
I’m interested in hearing about [open source software] projects that have successfully adopted an "only insiders use the issue tracker" approach. For instance, a project might have a mailing list where users discuss bugs in an unstructured way, and project insiders distill those discussions into bug reports to be entered into the issue tracker. Where does this approach succeed, and where does it fail? How can projects that operate this way effectively communicate their expectations to non-insider users, especially those users who might be more accustomed to using issue trackers directly?
More recently, Jillian C. York wrote:
...sick of "just file a bug with us through github!" You realize that's offputting to your average users, right?
If you want actual, average users to submit bugs, you know what you have to do: You have to use email. Sorry, but it's true.
Oh, and that goes especially for high-risk users. Give them easy ways to talk to you. You know who you are, devs.
Both Kuper and York are getting at the same question: How do we open source maintainers get the bug reports we need, in a way that works for us and for our users?
My short answer is that open source projects should have centralized bug trackers that are as easy as possible to work in as an expert user, and that they should find automated ways to accept bug reports from less structured and less expert sources. I'll discuss some examples and then some general principles.
Dreamwidth: Dreamwidth takes support questions via a customer support interface. The volunteers and paid staff answering those questions sometimes find that a support request reveals a bug, and then file it in GitHub on the customer's behalf, then tell her when it's fixed. (Each support request has a private section that only Support can see, which makes it easier to track the connection between Support requests and GitHub issues, and Support regulars tend to have enough ambient awareness of both Support and GitHub traffic to speak up when relevant issues crop up or get closed.) Dreamwidth users and developers who are comfortable using the GitHub issue tracker are welcomed if they want to file bugs there directly instead.
Dreamwidth also has a non-GitHub interface for feature suggestions: the suggestions form is the preferred interface for people to suggest new features for Dreamwidth. Users post their suggestions into a queue and a maintainer chooses whether to turn that suggestion into a post for open discussion in the dw-suggestions community, or whether to bounce it straight into GitHub (e.g., for an uncontroversial request to whitelist a new site for media embedding or add a new site for easy cross-site user linking, or at the maintainer's prerogative). Once a maintainer has turned a suggestion into a post, other users use an interface familiar to them (Dreamwidth itself) to discuss whether they want the feature. Then, if they and the maintainer come to consensus and approve it, the maintainer adds a ticket for it to GitHub. That moderation step has been a bottleneck in the past, and the process of moving a suggestion into GitHub also hasn't yet been automated.
Since discussion about site changes needs to include users who aren't developers, Dreamwidth maintainers prefer that people use the suggestions form; experienced developers sometimes start conversations in GitHub, but the norm (at least the official norm) is to use dw-suggestions; I think the occasional GitHub comment suffices for redirecting these discussions.
Zulip: We use GitHub issues. The Zulip installations hosted by Kandra Labs (the for-profit company that stewards the open source project) also have a "Send feedback" button in one of the upper corners of the Zulip web user interface. Clicking this opens a private message conversation with feedback-at-zulip.com, which users used more heavily when the product was younger. (We also used to have a nice setup where we could actually send you replies in-Zulip, and may bring that back in the future.)
I often see Tim Abbott and other maintainers noticing problems that new users/customers are having and, while helping them (via the zulip-devel mailing list, via the Zuliping-about-Zulip chat at chat.zulip.org, or in person), opening GitHub issues about the underlying problem, as the next step towards a long-term fix. But -- as with the Dreamwidth example -- it is also fine for people who are used to filing bug reports or feature requests directly to go ahead and file them in GitHub. And if Tim et alia know that the person they're helping has that skill and probably has the time to write up a quick issue, then the maintainers will likely say, "hey would you mind filing that in GitHub?"
We sometimes hold live office hours at chat.zulip.org. At yesterday's office hour, Tim set up a discussion topic named "warts" and said,
I think another good topic is to just have folks list the things that feel like they're some of our uglier/messier parts of the UI that should be getting attention. We can use this topic to collect them :).
Several people spoke up about little irritations, and we ended up filing and fixing multiple issues. One of Zulip's lead developers, Steve Howell, reflected: "As many bug reports as we get normally, asking for 'warts' seems to empower customers to report stuff that might not be considered bugs, or just empower them to speak up more." I'd also point out that some people feel more comfortable responding to an invitation in a synchronous conversation than initiating an asynchronous one -- plus, there's the power of personal invitation to consider.
As user uptake goes up, I hope we'll also have more of a presence on Twitter, IRC, and Stack Overflow in order to engage people who are asking questions there and help them there, and get proto-bug reports from those platforms to transform into GitHub issues. We already use our Twitter integration to help -- if someone mentions Zulip in a public Tweet, a bot tells us about it in our developers' livechat, so we can log into our Twitter account and reply to them.
MediaWiki and Wikimedia: Wikipedia editors and other contributors have a lot of places they communicate about the sites themselves, such as the technical-issues subforum of English Wikipedia's "Village Pump", and similar community-conversation pages within other Wikipedias, Wikivoyages, etc. Under my leadership, the team within Wikimedia Foundation's engineering department that liaised with the larger Wikimedia community grew more systematic about working with those Wikimedia spaces where users were saying things that were proto-bug reports. We got more systematic about listening for those complaints, filing them as bugs in the public bug tracker, and keeping in touch with those reporters as bugs progressed -- and building a kind of ambassador community to further that kind of information dissemination. (I don't know how well that worked out; I think we built a better social infrastructure for people who were already doing that kind of volunteer work ad hoc, but I don't know whether we succeeded in recruiting more people to do it, and I haven't kept a close eye on how that's gone in the years since I left.)
We also worked to make it easy for people to report bugs into the main bug tracker. The Bugzilla installation we had for most of the time that I was at Wikimedia had two bug reporting forms: a "simple" submission form that we pointed most people to, with far fewer fields, and an "advanced" form that Wikimedia-experienced developers used. They've moved to Phabricator now, and I don't know whether they've replicated that kind of two-lane approach.
A closed-source example: FogBugz. When I was at Fog Creek Software doing sales and customer support, we used FogBugz as our internal bug tracker (to manage TODOs for our products,* and as our customer relationship manager). Emails into the relevant email addresses landed in FogBugz, so it was easy for me to reply directly to help requests that I could fix myself, and easy for me to note "this customer support request demonstrates a bug we need to fix" and turn it into a bug report, or open a related issue for that bug report. If I recall correctly, I could even set the visibility of the issue so the customer could see it and its progress (unusual, since almost all our issue-tracking was private and visible only within the company).
An interface example: Debian. Debian lets you report bugs via email and via the command-line reportbug program. As the "how to use BTS" guide says,
some spam messages managed to send mails to -done addresses. Those are usually easily caught, and given that everything can get reverted easily it's not that troublesome. The package maintainers usually notice those and react to them, as do the BTS admins regularly.
The BTS admins also have the possibility to block some senders from working on the bug tracking system in case they deliberately do malicious things.
But being open and inviting everyone to work on bugs totally outweighs the troubles that sometimes pop up because of misuse of the control bot.
And that leads us to:
General guidelines: Dreamwidth, Zulip, MediaWiki, and Debian don't discourage people from filing bug reports in the official central bug tracker. Even someone quite new to a particular codebase/project can file a very helpful and clear bug report, after all, as long as they know the general skill of filing a good bug report. Rather, I think the philosophy is what you might find in hospitable activism in general: meet people where they are, and provide a means for them to conveniently start the conversation in a time, place, and manner that's more comfortable for them. For a lot of people, that means email, or the product itself.
Failure modes can include:
- a disconnect among the different "places" such that the central bug tracker is a black hole and nothing gets reported back to the more accessible place or the original reporter
- a feeling of elitism where only special important people are allowed to even comment in the main bug tracker
- bottlenecks such that it seems like there's a non-bug-tracker way to report a question or suggestion but that process has creaked to a halt and is silently blocking momentum
- bottlenecks in bug triage
- brusque reaction at the stage where the bug report gets to the central bug tracker (e.g., "oh that's a duplicate; CLOSE" without explanation or thanks), which jars the user (who's expecting more explicit friendliness) and which the user perceives as hostile
Whether or not you choose to increase the number of interfaces you enable for bug reporting, it's worth improving the user experience for people reporting bugs into your main bug tracker. Tedious, lots-of-fields issue tracker templates and UIs decrease throughput, even for skilled bug reporters who simply aren't used to the particular codebase/project they're currently trying to file an issue about. So we should make that easier. You can provide an easy web form, as Wikimedia did via the simplified Bugzilla form, or an email or in-application route, as Debian does.
And FLOSS projects oughta do what the Accumulo folks did for Kuper, too, saying, "I can file that bug for you." We can be inclusive-or rather than exclusive-or about it, you know? That's how I figure it.
* Those products were CityDesk, Copilot, and FogBugz -- this was before Kiln, Stack Overflow, Trello, and Glitch.
Thanks to Lindsey Kuper and Jillian C. York for sparking this post, and thanks to azurelunatic for making sure I got Dreamwidth details right.
# 04 Apr 2017, 12:37PM: How to Teach And Include Volunteers who Write Poor Patches:
You help run an open source software community, and you've successfully signalled that you're open to new contributors, including people who aren't professional software engineers. And you've already got an easy developer setup process and great test coverage so it's easy for new people to get up and running fast. Great!
Some of the volunteers who join you are less-skilled programmers, and they're submitting pull requests/patches that need a lot of review and reworking before you can merge them.
How do you improve these volunteers' work, help them do productive things for the project, and encourage and include them?
My suggestions for you fall into three categories: helping them improve their code, dealing with the poor-quality pull requests themselves, and redirecting their energies to improve the project in other ways.
Teaching them to improve their code
- Collect and suggest relevant learning resources, like certain talk recordings or freely available articles/exercises (e.g. The Architecture of Open Source Applications), and ask them to come back after they've watched/read/done them. Example: Zulip's collection.
- If developers have trouble writing good comments and commit messages, or diving into the codebase to find relevant files and commits, point them to my blog post "On the scientific method and usable history". It explains why it's important to do that, and gives them pointers.
- Ask more experienced contributors to pair program with them, both as leader and as follower. Here are a few tools to help.
- Run live coding exercises, over chat or video, where an experienced developer speaks aloud as she writes a bugfix, including all the little steps like searching for related commits, setting up and running tests, etc. This enables newer developers to learn a lot of tips that help them work faster and write higher-quality code. I've done this at Wikimedia with live video and we use Zulip for a live text approach (see Alicja Raszkowska's transcript and notes of one such session).
- If a big problem with their submissions is poor English writing skills, run some English tutoring sessions.
Dealing with poor patches themselves
Using their knowledge and curiosity to improve the project in other ways
- Ask these developers to write "discovery reports". They're already user-testing your developer onboarding process; ask them for their experiences, so you can find and fix pain points.
- Ask them to run through some manual testing (example manual testing guide from Zulip), and to tell you how long certain kinds of tests took, so you can get bug reports and improve the docs.
- Ask them to teach about your project in their communities -- to develop learning and presentation materials and speak at meetups. You may have just found your most enthusiastic marketer.
This list is absolutely not the be-all and end-all for this topic; I'd like to know what approaches others use.
Thanks to Noah Swartz for starting a conversation at Maintainerati that spurred me to write this post.
# 16 Mar 2017, 05:37PM: What Does An Award Do?:
I posted on MetaFilter about the new Disobedience Award that MIT Media Lab is starting (nomination deadline: May 1st). And in the comments there, I stumbled into talking about why one might found an award, and thought it was worth expanding a bit here.
I think anyone who thinks for a second about awards -- assuming the judgment is carried out in good faith -- says, well, it's to reward excellence. Yup! But what are the particular ways an award rewards excellence, and when might an award be a useful tool to wield?
Let's say you are an organization and you genuinely want to celebrate and encourage some activity or principle, because you think it's important and there's not enough of it, particularly because there are so many norms and logistical disincentives pushing to reduce it. For example, you might want to encourage altruistic resistance. Let's say your organization already has a bunch of ongoing processes, like teaching or making products or processing information, and maybe you make some changes in those processes to increase how likely it is that you're encouraging altruistic resistance, but that isn't really apparent to the world outside your doors in the near term, and the effects take a while to percolate out.
So maybe you could set up an award. An award can:
- get publicity for the idea that altruistic resistance is a thing to celebrate
- help one specific person or group who's currently practicing altruistic resistance keep going, with money and attention, and make a big difference to their stamina and effectiveness
- maybe bring attention to a list of finalists and help their work get more coverage
- ensure the award administrators (and any judging committee involved) and, to a lesser extent, the reporters covering the award, will spend time thinking about the importance of altruistic resistance
- cause a bunch of people to think "hmm, whom should I nominate?" and write a couple paragraphs about why their work is good and award-worthy (and, by causing that writing, also solidify the nominators' commitment to respecting and rewarding altruistic resistance)
- demonstrate your institutional commitment to altruistic resistance, potentially sending a hard-to-ignore message to your future self to guide future decisions
And if an award keeps going and catches on, then people start using it as a shorthand for a goal. New practitioners can dream of winning the acclamation that a Pulitzer, a Nobel, a Presidential Medal of Freedom carries. If there's an award for a particular kind of excellence, and the community keeps records of who wins that award, then in hard moments, it can be easier for a practitioner to think of that roll call of heroes and say to herself, "keep on going". We put people on pedestals not for them, but for us, so it's easier for us to see them and model ourselves after them.
So, all awards are simplistic summative judgments, but if the problem is that we need to balance the scales a bit, maybe it'll help anyway.
Nalo Hopkinson is doing it via the Lemonade Award for kindness in the speculative fiction community. The Tiptree Award does it for the expansion & exploration of gender. Open Source Bridge does it for community-making in open source with the Open Source Citizenship Award for "someone who has put in extra effort to share knowledge and make the open source world a better place."* It's worth considering: in your community, do people lack a way to find and celebrate a particular sort of excellence? You have a lot of tools you could wield, and awards are one of them.
* I realized today that I don't think the list of past Open Source Citizenship Award recipients is in one place anywhere! Each of these people was honored with a "Truly Outstanding Open Source Citizen" medal or plaque by the Open Source Bridge conference to celebrate our engagement "in the practice of an interlocking set of rights and responsibilities."
# 15 Oct 2016, 01:55PM: New Zine "Playing With Python: Two of My Favorite Lenses":
MergeSort, the feminist maker meetup I co-organize, had a table at Maker Faire earlier this month. Last year we'd given away (and taught people how to cut and fold) a few of my zines, and people enjoyed that. A week before Maker Faire this year, I was attempting to nap when I was struck with the conviction that I ought to make a Python zine to give out this year.
So I did! Below is Playing with Python: 2 of my favorite lenses. (As you can see from the photos of the drafting process, I thought about mentioning pdb, various cool libraries, and other great parts of the Python ecology, but narrowed my focus to bpython and python -i.)
Playing with Python
2 of my favorite lenses
[magnifying glass and eyeglass icons]
by Sumana Harihareswara
When I'm getting a Python program running for the 1st time, playing around & lightly sketching or prototyping to figure out what I want to do, I [heart]:
bpython & python -i
[illustrations: sketch of a house, outline of a house in dots]
bpython is an exploratory Python interpreter. It shows what you can do with an object:
>>> dogs = ["Fido", "Toto"]
>>> dogs.
append count extend index insert pop remove reverse sort
And, you can use Control-R to undo!
[illustrations: bpython logo, pointer to cursor after dogs.]
Use the -i flag when running a script, and when it finishes or crashes, you'll get an interactive Python session so you can inspect the state of your program at that moment!
$ python -i example.py
Traceback (most recent call last):
  File "example.py", line 5, in <module>
    toprint = varname + "entries"
TypeError: unsupported operand type(s) for +: 'int' and 'str'
[illustration: pointer to type(varname) asking, "wanna make a guess?"]
More: "A Few Python Tips"
This zine made in honor of
NYC's feminist makerspace!
CC BY-SA 2016 Sumana Harihareswara
Everyone has something to teach;
everyone has something to learn.
Here's the directory that contains those thumbnails, plus a PDF to print out and turn into an eight-page booklet with one center cut and a bit of folding. That directory also contains a screenshot of the bpython logo with a grid overlaid, in case you ever want to hand-draw it. Hand-drawing the bpython logo was the hardest thing about making this zine (beating "fitting a sample error message into the width allotted" by a narrow margin).
Libby Horacek and Anne DeCusatis not only volunteered at the MergeSort table -- they also created zines right there and then! (Libby, Anne.) The software zine heritage of The Whole Earth Software Review, 2600, BubbleSort, Julia Evans, The Recompiler, et alia continues!
(I know about bpython and python -i because I learned about them at the Recurse Center. Want to become a better programmer? Join the Recurse Center!)
# 12 Oct 2016, 11:00AM: Rough Notes for New FLOSS Contributors On The Scientific Method and Usable History:
Some thrown-together thoughts towards a more comprehensive writeup. It's advice about how to get along better as a new open source participant, based on the fundamental wisdom that you weren't the first person here and you won't be the last.
We aren't just making code. We are working in a shared workplace, even if it's an online place rather than a physical office or laboratory, making stuff together. The work includes not just writing functions and classes, but experiments and planning and coming up with "we ought to do this" ideas. And we try to make it so that anyone coming into our shared workplace -- or anyone who's working on a different part of the project than they're already used to -- can take a look at what we've already said and done, and reuse the work that's been done already.
We aren't just making code. We're making history. And we're making a usable history, one that you can use, and one that the contributor next year can use.
So if you're contributing now, you have to learn to learn from history. We put a certain kind of work in our code repositories, both code and notes about the code. git grep idea searches a code repository's code and comments for the word "idea", git log --grep="idea" searches the commit history for times we've used the word "idea" in a commit message, and git blame codefile.py shows you who last changed every line of that codefile, and when. And we put a certain kind of work into our conversations, in our mailing lists and our bug/issue trackers. We say "I tried this and it didn't work" or "here's how someone else should implement this" or "I am currently working on this". You will, with practice, get better at finding and looking at these clues, at finding the bits of code and conversation that are relevant to your question.
And you have to learn to contribute to history. This is why we want you to ask your questions in public -- so that when we answer them, someone today or next week or next year can also learn from the answer. This is why we want you to write emails to our mailing lists where you explain what you're doing. This is why we ask you to use proper English when you write code comments, and why we have rules for the formatting and phrasing of commit messages, so it's easier for someone in the future to grep and skim and understand. This is why a good question or a good answer has enough context that other people, a year from now, can see whether it's relevant to them.
Relatedly: the scientific method is for teaching as well as for troubleshooting. I compared an open source project to a lab before. In the code work we do, we often use the scientific method. In order for someone else to help you, they have to create, test, and prove or disprove theories -- about what you already know, about what your code is doing, about the configuration on your computer. And when you see me asking a million questions, asking you to try something out, asking what you have already tried, and so on, that's what I'm doing. I'm generally using the scientific method. I'm coming up with a question and a hypothesis and I'm testing it, or asking you to test it, so we can look at that data together and draw conclusions and use them to find new interesting questions to pursue.
So I'll ask a question to try and prove or disprove my hypothesis. And if you never reply to my question, or you say "oh I fixed it" but don't say how, or if you say "no that's not the problem" but you don't share the evidence that led you to that conclusion, it's harder for me to help you. And similarly, if I'm trying to figure out what you already know so that I can help you solve a problem, I'm going to ask a lot of diagnostic questions about whether you know how to do this or that. And it's ok not to know things! I want to teach you. And then you'll teach someone else.
For example, a troubleshooting exchange might break down like this:
- Expected result: doing run-dev.py on your machine will give you the same results as on mine.
- Actual observation: you get a different result, specifically, an error that includes a permissions problem.
- Hypothesis: the relevant directories or users aren't set up with the permissions they need.
- Next step: Request for further data to prove or disprove hypothesis.
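As a sketch of what that next step might look like in code: a quick Python check of the permissions hypothesis. (run-dev.py comes from the example above; the path you'd actually inspect is whatever the error message named.)

```python
# Sketch: test the hypothesis that a directory lacks the permissions
# the dev script needs. Pass in whichever path the error named.
import os

def check_access(path):
    """Report read/write/execute access for the current user on path."""
    return {
        "read": os.access(path, os.R_OK),
        "write": os.access(path, os.W_OK),
        "execute": os.access(path, os.X_OK),
    }

# e.g., check_access("/path/from/the/error")  # path is hypothetical
```

If the report comes back all True, the hypothesis is disproved, and it's time to formulate and test the next one.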
In our coding work, it's a shared responsibility to generate hypotheses and to investigate them, to put them to the test, and to share data publicly to help others with their investigations. And it's more fruitful to pursue hypotheses, to ask "I tried ___ and it's not working; could the reason be this?", than it is to merely ask "what's going on?" and push the responsibility of hypothesizing and investigation onto others.
This is a part of balancing self-sufficiency and interdependence. You must try, and then you must ask. Use the scientific method and come up with some hypotheses, then ask for help -- and ask for help in a way that helps contribute to our shared history, and is more likely to help ensure a return-on-investment for other people's time.
So it's likely to go like this:
1. you try to solve your problem until you get stuck, including looking through our code and our documentation, then start formulating your request for help
2. you ask your question
3. someone directs you to a document
4. you go read that document, and try to use it to answer your question
5. you find you are confused about a new thing
6. you ask another question
7. now that you have demonstrated that you have the ability to read, think, and learn new things, someone has a longer talk with you to answer your new specific question
8. you and the other person collaborate to improve the document that you read in step 4 :-)
This helps us make a balance between person-to-person discussion and documentation that everyone can read, so we save time answering common questions but also get everyone the personal help they need. This will help you understand the rhythm of help we provide in livechat -- including why we prefer to give you help in public mailing lists and channels, instead of in one-on-one private messages or email. We prefer to hear from you and respond to you in public places so more people have a chance to answer the question, and to see and benefit from the answer.
We want you to learn and grow. And your success is going to include a day when you see how we should be doing things better, not just with a new feature or a bugfix in the code, but in our processes, in how we're organizing and running the lab. I also deeply want for you to take the lessons you learn -- about how a group can organize itself to empower everyone, about seeing and hacking systems, about what scaffolding makes people more capable -- to the rest of your life, so you can be freer, stronger, a better leader, a disruptive influence in the oppressive and needless hierarchies you encounter. That's success too. You are part of our history and we are part of yours, even if you part ways with us, even if the project goes defunct.
This is where I should say something about not just making a diff but a difference, or something about the changelog of your life, but I am already super late to go on my morning jog and this was meant to be a quick-and-rough braindump anyway...
# 04 Aug 2016, 03:51PM: Advice on Starting And Running A New Open Source Project:
Recently, a couple of programmers asked me for advice on starting and running a new open source project. So, here are some thoughts, assuming you're already a programmer, you haven't led a team before, and you know your new software project is going to be open source.
I figure there are a few different kinds of best practices in starting and running open source projects.
General management: Some of my recommendations are the same kinds of best practices that are useful anytime you're starting/running/managing any kind of project, inside or outside the software world.
For instance: know why you're starting this thing. Write down even just a one-paragraph or 100-word bulleted list description of what you are aiming at. This will reduce the chance that you'll look up one day and see that your targeted little tool has turned into a mess that's trying to be an entire operating system.
And: if you're making something that you want other people to use, then check what those other people are already using/doing, so you can make sure you suit their needs. This guards against any potential perception that you are starting a new project thoughtlessly, or just for the heck of it, or to learn a new framework. In the software world, this includes taking note of your target users' dependencies (e.g., the versions of Python/NumPy that they already have installed).
Resources I have found useful here include William Ball's book on theatrical direction A Sense of Direction, Dale Carnegie's How to Win Friends and Influence People, Fisher & Ury's Getting To Yes, Cialdini's Influence: The Science of Persuasion, and Ries & Trout's Positioning: The Battle for Your Mind.
Tech management: Some best practices are the same kinds of habits that help in managing any kind of software project, including closed-source projects as well.
For instance: more automated tests in/for your codebase are better, because they reduce regressions so you can move faster and merge others' code faster (and let others review and merge code faster), but don't sweat getting to 100%, because there's definitely a decreasing marginal utility to this stuff. Travis CI is pretty easy to set up for the common case.
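To give a sense of scale for "more automated tests": even a few lines of unittest guard against regressions. A minimal sketch, where slugify stands in for whatever small function your project actually has:

```python
# A minimal regression test for a hypothetical utility function -- the
# sort of cheap guard worth accumulating long before you chase coverage.
import unittest

def slugify(title):
    """Hypothetical project function: turn a post title into a URL slug."""
    return "-".join(title.lower().split())

class TestSlugify(unittest.TestCase):
    def test_basic(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_collapses_whitespace(self):
        self.assertEqual(slugify("  two   words "), "two-words")

if __name__ == "__main__":
    unittest.main(exit=False)
```

Hook whatever runner you choose into your CI service so that every pull request gets checked automatically, and reviewers can spend their attention on design rather than regressions.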
I assume you're using Git. Especially if you're going to be the maintainer on a code level, learn to use Git beyond just push and pull. Clone a repo of a project you don't care about and try the more advanced commands as you make little changes to the code, so if you ruin everything you haven't actually set your own work back. Learn to branch and merge and work with remotes and cherry-pick and bisect. Read this super useful explanation of the Git model which articulates what's actually doing what -- it helps.
Good resources here include Brooks's The Mythical Man-Month, DeMarco & Lister's Peopleware, Heidi Waterhouse's "The Seven Righteous Fights", Camille Fournier's blog, and my own talk "Learn Tech Management in 45 Minutes" and my article "Software in Person". I myself earned a master's in technology management and if you are super serious about becoming a technology executive then that's a path I can give more specific thoughts on, but I'm not about to recommend that amount of coursework to someone who isn't looking to make a career out of this.
Open source management: And some best practices are the specific social, product management, architectural, and infrastructural best practices of open source projects. A few examples:
If you're the maintainer, it's key to reply to new project-related emails, queries, bug reports, and patches fast; a Mozilla analysis backs up our experience that a kind, fast, negative response is better than a long silent delay. Reply to people fast, even if it's just "I saw this, thank you, I'm busy, will get to this in a few weeks," because otherwise the uncertainty is deathly and people's enthusiasm and momentum drip away.
Make announcements somewhere public and easily findable that say something about the current state of your project, e.g., about whether it's ready to use or when to expect it to be. This could even just be someplace prominent in your README when you're just getting started. This is also a good place to mention if you're going to be at any upcoming conferences, so people can connect to you that way.
Especially when it comes to code, docs, and bug/feature/task lists, work in the open from as early as possible, preferably from the start. Treat private work as a special case (sometimes a useful one when it comes to communication with users and with new contributors, as a tidepool incubates growth that can then flow into the ocean).
I am sad, as a FLOSS zealot, to say that you should probably be on the closed-source platform that is GitHub. But yeah, the intake funnel for code and bug contributors is easier on GitHub than on any other platform; unless you are pretty sure you already know who all the people are who will use and improve this software, and they're all happy on GitLab or similar, GitHub is going to get you more and faster contributors.
You are adjacent to or embedded in other programming communities, like the programming language & frameworks you're using. Use the OSI-approved license that the projects you're adjacent to/depending on use, to make reuse easier.
It's never too early to think about governance. As Christie Koehler of Authentic Engine warns, to think about codes of conduct, you also gotta think about governance. (The Contributor Covenant is a popular starting point.) If you can be under the umbrella of a software-related nonprofit, like NumFOCUS, that'll help you make and implement these choices.
Top reading recommendation: Karl Fogel's Producing OSS is basically the bible for this category, and the online version is up-to-date with new advice from this year. If you read Producing OSS cover-to-cover you will be entirely set to start and run your project.
Additionally: Fogel also co-wrote criteria for assessing whether a project "is created and managed in a sustainably open source way". And I recommend my own blog post "How To Improve Bus Factor In Your Open Source Project", the Linux Foundation CII criteria (hat-tip to Benjamin Gilbert), "build your own rockstars" by one of the founders of the Dreamwidth project, and "dreamwidth as vindication of a few cherished theories" by that same founder (especially the section starting "our development environment and how we managed to create a process and culture that's so welcoming").
Obligatory plug: I started Changeset Consulting, which provides targeted project management and release management services for open source projects and the orgs that depend on them. In many ways I am maintainer-as-a-service. If you want to talk more about this work, please reach out!
# 29 Mar 2016, 08:01PM: Tips To Increase Your Conference Talk Acceptance Rate:
This year I submitted talks to several tech conferences and got a higher acceptance rate than I had been used to. For instance, this year I will speak for the first time at OSCON and PyCon North America, conferences that had previously rejected my proposals.
Why did this happen? I am a more polished and experienced speaker than I was in previous years, yes; program committees can see more videos and read more transcripts of my past talks. I have a better résumé and more personal connections. And through practice, I've gotten past the "get ideas on the page" stage of writing conference proposals, and learned how to better suggest a useful talk relevant to the audience.
Some of those factors you can't replicate today. But others, you can. Here they are. I believe following these tips will increase the acceptance rate for your software conference talk submissions.
Learn what they need.
Check whether you already know someone involved in selecting talks for the conference or a sub-track. If you do, ask them if they have any topic gaps you could help fill. Maybe they've gotten no talks yet on, say, Python web frameworks other than Django; they might specifically encourage you. And even if you don't know the con-runners personally, check their social media presences, in case they've spoken there and specifically asked for more talks about certain topics, or more talks from people from underrepresented perspectives or groups (example).
If this conference has happened before, look at the previous year's schedule; look for the range of the possible. What is this conference's universe of discourse? This helps you match the audience's interests and helps you avoid duplicating a conference talk from last year. What topics are adjacent to the topics they already seem to have covered?
Speaking of duplication, here's my take on "repeating yourself":
Tell them if you gave the talk already.
If you've given the talk before, say so, and link to the text, transcript and video/audio. Having all this info ready in your comprehensive "past talks" page is handy (see below). They can try before they buy! If you gave the talk to a standing-room only crowd and great acclaim, say so! And even if that was not the case, if you've already given this talk at another conference, now they know that it was good enough for someone else already, which is a useful social signal. I delivered "HTTP Can Do That?!" at Open Source Bridge last year, and in 2016 PyCon North America and Great Wide Open accepted my proposal to give nearly the same talk.
Different people and subcommunities follow different norms or rules of etiquette in distinguishing rude repetitiveness from sensible re-use. You might be able to lead the same multi-hour skill-building workshop at one convention multiple years in a row. I don't think I'll be re-proposing "HTTP Can Do That?!" in 2017 at any conferences; a 2-year exposure window feels all right to me.
More on reuse:
Reuse your blog post that went viral.
Basing the talk on a blog post or article of yours that got a lot of responses is great. Point to it and to the long comment threads, response blog posts, etc. This demonstrates that the community is interested in your thoughts on this topic, and that you've already thought about it a lot. I have successfully done this by turning my "Inessential Weirdnesses in Open Source" blog post from 2014 into a talk LibrePlanet and OSCON accepted this year.
I know, I said I'd only give you advice you can replicate today. I have no secrets to share about the internet fame lottery. But you can think aloud in a blog post or a Model View Culture essay or LWN article, and get feedback that helps you sharpen your message. Or, if you found someone else's article super provocative, you can ask them to pair up with you and co-submit.
And in general:
Save what you write.
Look at the talk submission form and figure out what they'll want, but write your submission and save it somewhere else. This makes it easier to reuse your proposal for multiple conferences, and to have multiple proposals to submit to any one conference.
Submit two or three proposals to each conference.
If I want to speak at a conference, I usually submit multiple proposals on different topics. I have never heard of a conference that limits submissions to one per speaker. The first time it occurred to me to do this: 2009, when I saw that the Open Source Bridge submission system lets everyone see everyone else's submission. I saw that some men were submitting three, four, even five proposals. Oh, you can do that! I submitted three talks, and one got in.
For each of those proposals:
Specify the audience and the takeaways.
Some conference submission forms explicitly ask who the audience is for your talk, or what prerequisites you think they should have. Also, some ask for the objectives of your talk, or what the audience will take away. This is a great question. Give specific answers. Even if the conference doesn't ask for these things, I suggest you include them anyway -- put them in the abstract, additional info field, detailed description, or similar.
Two examples, from talks I successfully submitted. Example 1:
Audience: Developers with at least enough web development experience that they've used GET and POST, but who are unfamiliar with using DELETE or conditional GET
Objectives: Attendees will learn about HTTP 1.1 verbs, headers, response codes, and capabilities they did not know of before, with use cases, example code, and jokes. They will walk away feeling more capable of using more of the HTTP featureset and with a greater understanding of the underlying design of the protocol.
Example 2:
Takeaway: Open source contributors and leaders who are already comfortable with our norms and jargon will learn how to see their own phrasings and tools as outsiders do, and to make more hospitable experiences during their outreach efforts.
Prereqs: Attendees need to already be able to use core open source tools (like version control, IRC, mailing lists, bug trackers, and wikis), and be familiar with general open source culture and trends such as "+1", scorn towards Microsoft Windows, and the argument between copyleft and permissive licenses.
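Since the first example's audience line mentions conditional GET: here's a minimal, hypothetical server-side sketch of how ETag-based conditional GET behaves (my own illustration of the kind of capability that talk covers, not material from the talk itself; the function name is invented):

```python
import hashlib

def handle_get(body: bytes, request_headers: dict):
    """Toy handler: return (status, headers, body), honoring If-None-Match."""
    etag = '"%s"' % hashlib.sha256(body).hexdigest()[:16]
    if request_headers.get("If-None-Match") == etag:
        # The client's cached copy is current: send 304 and no body at all.
        return 304, {"ETag": etag}, b""
    return 200, {"ETag": etag}, body

status1, headers, _ = handle_get(b"hello", {})
status2, _, body2 = handle_get(b"hello", {"If-None-Match": headers["ETag"]})
print(status1, status2, len(body2))  # a 200 first, then a 304 with empty body
```

The second request re-sends the ETag it got from the first, so the server can skip re-sending the body entirely.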
And in each proposal:
Provide a detailed outline.
Be concise in the short-form description, and then write out a big outline for the "more details" or longer abstract. For 45-minute talks I've provided detailed outlines/abstracts of 300-1200 words. The committee can easily infer that you have already thought a lot about this topic and are pretty prepped, and thus that there's less risk of you giving a half-baked or rambling talk.
Finally, three more logistical tips:
Get the proposals in on time.
You know yourself. You have probably figured out how to avoid missing appointments and deadlines. Do that. Set up phone alarms, calendar reminders, a boomerang'd email, a Twitter feed subscription, a promise to an accountability buddy, what have you.
Publish a "My past talks" page.
Here's mine. List your past talks, including workshops and classes you've led and podcasts or interviews that feature you. Where possible, link to or embed sample video and audio so a conference can try before they buy. If you can, name the conference, place, and year for each one; in some cases it might make more sense to say "led weekly brownbag talk series, Name of Employer, 2010-2012" or similar. A longer public speaking résumé means you're less of a risk.
Customize your bio.
A good biographical paragraph establishes your credibility and likelihood to speak about the subject and say interesting things. For instance, I've taught interactive workshops before, and I've won an Open Source Citizen Award. Sometimes I put this kind of thing in my bio, sometimes in an "additional info" field. This is one place that a comprehensive "My past talks" page comes in handy. I can cut and paste stuff from that page into a customized bio that demonstrates why I'm a great choice to give this talk.
That's what I know so far.
I add my voice here to the multitude of resources on this topic, and especially commend to you Lena Reinhard's excellent and comprehensive talk prep guide and the weekly Technically Speaking email newsletter.
Thanks to Christie Koehler for the conversation that sparked this post!
# 21 Mar 2016, 04:58PM: What Is Maintainership?:
Yesterday, at my first LibrePlanet conference, I delivered a somewhat impromptu five-minute lightning talk, "What is maintainership? Or, approaches to filling management skill gaps in free software". I spoke without a script, and what follows is what I meant to say; I hope it bears a strong resemblance to what I actually said. I do not know whether any video of this session will appear online; if it does, I'll update this entry.
What is Maintainership?
Or, approaches to filling management skill gaps in free software
Sumana Harihareswara, Changeset Consulting
LibrePlanet, Cambridge, MA, 20 March 2016
Why do we have maintainers in free software projects? There are various explanations you can use, and they affect how you do the job of maintainer, how you treat maintainers, how and whether you recruit and mentor them, and so on.
So here are three -- they aren't the only ways people think about maintainership, but these are three I have noticed, and I have given them alliterative names to make it easier to think about and remember them.
Sad: This is a narrative where even having maintainers is, fundamentally, an admission of failure. Madison said a lot of BS, but one thing he said that wasn't was: "If men were angels, we would have no need of government." And if every contributor contributed equally to bug triage, release management, communication, and so on, then we wouldn't need to delegate that responsibility to someone, to a maintainer. But it's not like that, so we do. It's an approach to preventing the Tragedy of the Commons.
I am not saying that this approach is wrong. It's totally legitimate if this is how you are thinking about maintainership. But it's going to affect how your community does it, so, just be aware.
Skill: This approach says, well, people want to grow their skills. This is really natural. People want to get better, they want to achieve mastery, and they want validation of their mastery, they want other people to respect their mastery. And the skill of being a maintainer, it's a skill, or a set of skills, around release management, communication, writing, leadership, and so on. And if it's a skill, then you can learn it. We can mentor new maintainers, teach them the skills they need.
So in this approach, people might have ambition to be maintainers. And ambition is not a dirty word. As Dr. Anna Fels puts it in her book Necessary Dreams, ambition is the combination of the urge to achieve mastery of some domain and the desire to have your peers, or people you admire, acknowledge, recognize, validate your mastery.
With this skills approach, we say, yeah, it's natural that some people have ambition to get better as developers and also to get better at the skills involved in being a maintainer, and we create pathways for that.
Sustain: OK, now we're talking about the economics of free software, how it gets sustained. If we're talking about economics, then we're talking about supply and demand. And I believe that, in free software right now, there is an oversupply of developers who want to write feature code, relative to an undersupply of people with the temperament and skills and desire to do everything else that needs doing, to get free software polished and usable and delivered and making a difference. This is because of a lot of factors, who we've kept out and who got drawn into the community over the years, but anyway, it means we don't have enough people who currently have the skill and interest and time to do tasks that maintainers do.
But we have all these companies, right? Companies that depend on, that are built on free software infrastructure. How can those with more money than time help solve this problem?
[insert Changeset Consulting plug here]. You can hire my firm, Changeset Consulting, to do these tasks for a free software project you care about. Changeset Consulting can do bug triage, doc rewriting, user experience research, contributor outreach, release management, customer service, and basically all the tasks involved in maintainership except for the writing and reviewing of feature code, which is what those core developers want to be doing anyway. It's maintainer-as-a-service.
Of course you don't have to hire me. But it is worth thinking about what needs to be done, and disaggregating it and seeing what bits companies can pay for, to help sustain the free software ecology they depend on.
So: sad, skill, sustain. I hope thinking about what approach you are taking helps your project think about maintainership, and what it needs to do to make the biggest long-term impact on software freedom. Thank you.
# 19 Feb 2016, 06:50PM: What Should We Stop Doing? (FLOSS Community Metrics Meeting keynote):
"What should we stop doing?": written version of a keynote address by Sumana Harihareswara, delivered at the FLOSS Community Metrics Meeting just before FOSDEM, 29 January 2016 in Brussels, Belgium. Slide deck is a 14-page PDF. Video is available. The notes I used when I delivered the talk were quite skeletal, so the talk I delivered varied substantially on the sentence level, but covered all the same points.
I'd like to start with a story about the excellent boss I worked for at the Wikimedia Foundation, Rob Lanphier, and what he told me when I'd been on the job about eight months. In one of our one-on-one meetings, I mentioned to him that I felt overwhelmed. And first, he told me that I'd been on the job less than a year, and it takes a year to ramp up fully in that job, so I shouldn't be too worried. And then he reminded me that we were in an amazing position, that we would hear and get all kinds of great ideas, but that in order to get anything done, we would have to focus. We'd have to learn to say, "That's a great idea, and we're not doing it." And say it often. And, he reminded me, I felt overwhelmed because I actually had the power to make choices, about what I did with my time, that would affect a lot of people. I was not just cog # 15,000 doing a super specialized task at Apple.
So today I want to talk with you about how to use the power you have, in your open source projects and organizations, and about saying no to a lot of things, so you can focus on doing fewer things well -- the Unix philosophy, right? I'll talk about a few tools and leave you with some questions.
Tool 1: Remember to say no to the lamppost fallacy
The lamppost fallacy is an old one, and the story goes that a drunk guy says, "I dropped my keys, will you help me look for them?" "OK, sure. Where'd you drop them?" "Under that tree." "So why are you looking for them under this lamppost?" "Well, the light is better here."
A. Quantitative vs qualitative in the dev data
The first place we ought to check for the lamppost fallacy is in overvaluing quantitative metrics over qualitative analysis when looking at developer workflow and experience. Dave Neary said, in the FLOSSMetrics meeting in 2014, in "What you measure is what you get. Stories of metrics gone wrong": Use qualitative and quantitative analysis to interpret metrics.
When it comes to developer experience, you can be analytical whether your data is quantitative or qualitative. And you rather have to be, because as soon as you start uncovering numbers, you start asking why they are what they are and what could be done to change that, and that's where the qualitative analytical approach comes in.
Qualitative is still analytical! Camille Fournier's post, "Qualitative or quantitative but always analytical", goes into this:
qualitative is still analytical. You may not be able to use data-driven reasoning because you're starting something new, and there are no numbers. It is hard to do quantitative analysis without data, and new things only have secondary data about potential and markets, they do not have primary data about the actual user engagement with the unbuilt product that you can measure. Furthermore, even when the thing is released, you probably have nothing but "small" data for a while. If you only have a thousand people engaging with something, it is hard to do interesting and statistically significant A/B tests unless you change things drastically and cause massive behavioral changes.
This is applicable to developer experience as well!
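To make the small-data point concrete, here's a rough back-of-envelope sketch (my own illustration with invented numbers, not Fournier's) of a two-proportion z-test on an A/B experiment:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-statistic: how many standard errors apart the arms are."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return abs(p_a - p_b) / se

# 1,000 users split into two arms: a 10% vs. 12% conversion difference
# yields z of about 1.0, well under the usual ~1.96 significance bar.
small = two_proportion_z(50, 500, 60, 500)

# 100,000 users: the very same difference yields z of about 10. Unmistakable.
big = two_proportion_z(5000, 50000, 6000, 50000)
print(round(small, 2), round(big, 2))
```

With only a thousand engaged users, a realistic effect size simply drowns in noise, which is why Fournier says you'd have to change things drastically before an A/B test tells you anything.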
For help, I recommend the Wikimedia movement's Grants Evaluation & Learning team's table discussing quantitative and qualitative approaches you can take -- ethnography, case studies, participant observation, and so on -- to deepen understanding. It's complementary with the quantitative side, which is about generalizing findings.
B. Quantifiable dev artifacts-and-process data versus data about everything else
Another place to check for the lamppost fallacy is in overvaluing quantifiable data about programming artifacts and process over all sorts of data about everything else that matters about your project. Earlier today, Jesus González-Barahona mentioned the many communities -- dev, contributor, user, larger ecosystem -- that you might want to research. There's lots of easily quantifiable data about development, yes, but what is actually important to your project? Dev, user, sysadmin, larger ecology -- all of these might be, honestly, more important to the success of your mission. And we also know some things about how to get better at getting user data.
For help, I recommend the Simply Secure guides on doing qualitative UX research, such as seeing how users are using your product/application. And I recommend you read existing research on software engineering, like the findings in Making Software: What Really Works and Why We Believe It, the O'Reilly book edited by Andy Oram and Greg Wilson.
Tool 2: know what kind of assessment you're trying to do and how it plays into your theory of change
Another really important tool that will help you say no to some things and yes to others is knowing what kind of assessment you're trying to make, and how that plays into your hypothesis, your theory of change.
I'm going to mess this up compared to a serious education researcher, but it's worth knowing the basics of the difference between formative and summative assessments.
Formative assessment or evaluation is diagnostic, and you should use it iteratively to make better decisions to help students learn with better instruction & processes.
Summative assessment is checking outcomes at the conclusion of an exercise or a course, often for accountability, and judging the worth/value of that educational intervention. In our context as open source community managers, this often means that this data is used to persuade bosses & community that we're doing a good job or that someone else is doing a bad job.
As Dawn Foster last year said in her "Your Metrics Strategy" speech at the FLOSSMetrics meeting:
METRICS ARE USEFUL Measure progress, spot trends and recognize contributors.
Start with goals: WHY FOCUS ON GOALS? Avoid a mess: measure the right things, encourage good behavior.
Here's Ioana Chiorean, FLOSS Community Metrics meeting, January 30th 2015, "How metrics motivate":
Measure the right things... specific goals that will contribute to your organization's success
Dave Neary in 2014 in "What you measure is what you get. Stories of metrics gone wrong" at the Metrics meeting said:
be careful what you measure: metrics create incentives
Focus on business and community's success measurements
And this is tough. Because it can be hard to really make a space for truly formative assessment, especially if you are doing everything transparently, because as soon as you gather and publish any data, people will use it to argue that we ought to make drastic changes, not just iterative changes. But it might help to remember what you are truly aiming at, what kind of evaluation you really mean to be doing.
And it helps a lot to know your Theory of Change. You have an assessment of the way the world is, a vision of how you want the world to look, and a hypothesis about some change you could make, an activity or intervention you could perform to move us closer from A to B.
There's a chicken and egg problem here. How do you form the hypothesis without doing some initial measurement? And my perhaps subversive answer is, use ideas from other communities and research to create a hypothesis, and then set up some experiments to check it. Or go with your gut, your instinct about what the hypothesis is, and be ready to discard it if the data does not bear it out.
For help: Check out educational psychology, such as cognitive apprenticeship theory - Mel Chua's presentation here gives you the basics. You might also check out the Program/Grant Learning & Evaluation findings from Wikimedia, and try out how the "pirate metrics" funnel -- Acquisition, Activation, Retention, Referral, Revenue, or AARRR -- fits with your community's needs and bottlenecks.
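As a sketch of trying that funnel on, here's a toy computation with invented contributor counts mapped onto the AARRR stages; the stage-to-stage conversion rates point at the bottleneck:

```python
# Hypothetical counts for a contributor funnel, mapped onto AARRR stages.
funnel = [
    ("Acquisition", 1200),  # visited the contributor docs
    ("Activation", 240),    # opened a first pull request
    ("Retention", 60),      # contributed again within three months
    ("Referral", 15),       # brought in another contributor
    ("Revenue", 5),         # became a sponsoring-organization contact
]

# Compute each stage-to-stage conversion rate; the smallest is the bottleneck.
rates = []
for (stage, n), (next_stage, next_n) in zip(funnel, funnel[1:]):
    rates.append((f"{stage} -> {next_stage}", next_n / n))

bottleneck = min(rates, key=lambda pair: pair[1])
for name, rate in rates:
    print(f"{name}: {rate:.0%}")
print("Biggest bottleneck:", bottleneck[0])
```

With these made-up numbers, Acquisition to Activation is the weakest step, which would suggest spending effort on first-contribution onramps rather than on, say, retention perks.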
Tool 3: if something doesn't work, acknowledge it
And the third tool is that when we see data saying that something does not work, we need to have the courage to acknowledge what the data is saying. You can move the goalposts, or you can say no and cause some temporary pain. We have to be willing to take bug reports.
Here's an example. The Wikimedia movement likes to host editathons, where a bunch of people get together and learn to edit Wikipedia together. We hoped that would be a way to train and retain new editors. But Wikipedia editathons don't produce new long-term editors. We learned:
About 52% of participants identified as new users made at least one edit one month after their event, but the percentage editing dropped to 15% in the sixth month after their event
And, in "What we learned from the English Wikipedia new editor pilot in the Philippines":
Inviting contribution by surfacing geo-targeted article stubs was not enough to motivate or help users to make their first edits to an article. Together, all new editors who joined made only six edits in total to the article space during this experiment, and they made no edits to the articles we suggested.
Providing suggestions via links to places users might go for help did not appear to sufficiently support or motivate these new editors to get involved. 50 percent of those surveyed later said they didn’t look for help pages. Those who did view help pages nevertheless did not edit the suggested articles.
But over and over in the Wikimedia movement I see that we keep hosting those one-off editathons. And they do work to, for instance, add new high-quality content about the topics they focus on, and some people really like them as parties and morale boosters, and I've heard the argument that they at least get a lot of people through that first step, of creating an account and making their first edit. But that does not mean that they're things we should be spending time on, to reverse the editor decline trend. We need to be honest about that.
It can be hard to give up things we like doing, things we think are good ideas and that ought to work. As an example: I am very much in favor of mentorship and apprenticeship programs in open source, like Google Summer of Code and Outreachy. Recently, the researchers Adriaan Labuschagne and Reid Holmes raised questions in "Do Onboarding Programs Work?" (2015) about whether these kinds of programs move the needle enough, in the long run, to bring new contributors in. It's not conclusive, but there are questions. And I need to pay attention to that kind of research and be willing to change my recommendations based on what actually works.
We can run into cognitive dissonance if we realize that we did something that wasn't actually effective. Why did I do this thing? Why did we do this thing? There's an urge to rationalize it. The Wikimedia FailFest & Learning Pattern hackathon 2015 recommends that we try framing our stories about our past mistakes to avoid that temptation.
Big 'F' failure framing:
- We planned this thing: __________________________
- This is how we knew it wasn't working: __________________________
- There might have been some issues with our assumption that: __________________________
- If we tried it again, we might change: __________________________
Little 'f' failure framing:
- We planned this thing: __________________________
- This is how we knew it wasn't working: __________________________
- We think that this went wrong: __________________________
- Here is how to fix it: __________________________
For help with this tool, I suggest reading existing research evaluating what works in FLOSS and open culture, like "Measuring Engagement: Recommendations from Audit and Analytics" by David Eaves, Adam Lofting, Pierros Papadeas, Peter Loewen of Mozilla.
I have a much larger question to leave you with.
One trend I see underlying a big chunk of FLOSS metrics work is the desire to automate the emotional labor involved in maintainership, like figuring out how our fellow contributors are doing, making choices about where to spend mentorship time, and tracking a community's emotional tenor. But is that appropriate? What if we switched our assumptions around and used our metrics to figure out what we're spending time on more generally, and tried to find low-value programming work we could stop doing? What tools would support this, and what scenarios could play out?
This is a huge question and I have barely scratched the surface, but I would love to hear your thoughts. Thank you.
Sumana Harihareswara, Changeset Consulting
# 16 Sep 2015, 01:03PM: Software In Person:
In February, while coworking at the Open Internet Tools Project, I got to talking with Gus Andrews about face-to-face tech events. Specifically, when distributed people who make software together have a chance to get together in person, how can we best use that time? Gus took a bunch of notes on my thoughts, and gave me a copy.
Starting with those, I've written a piece that Model View Culture has published today: "Software In Person".
Distributed software-making organizations (companies, open source projects, etc.) generally make time to get people together, face-to-face. I know; I've organized or run hackathons, sprints, summits, and all-hands meetings for open source projects and businesses (and if I never have to worry about someone else's hotel or visa again, it'll be too soon).
Engineers often assume we don't need to explicitly structure that time together, or default to holding an unconference. This refusal to reflect on users' needs (in this case, the participants in the event) is lazy management. Or event organizers fall back to creating conferences like the ones we usually see in tech, where elite men give hour-long lectures, and most participants don't have any opportunities to collaborate or assess their skills. Still a bad user experience, and a waste of your precious in-person time.
Why do you think you're spending hundreds of thousands of dollars holding hackathons, sprint weeks, and conferences? And how could you be using that time and money better?
Subsections include "Our defaults", "Investing for the long term", "Beyond 'hack a lot'", "Grow your people", and "Setting yourself up for success". Thanks to Gus and to Model View Culture for helping me make this happen!
# 09 Aug 2015, 10:52PM: How To Improve Bus Factor In Your Open Source Project:
Someone in one of my communities was wondering whether we ought to build a new automated tool to give little tasks to newcomers and thus help them turn into future maintainers. I have edited my replies to him into the How To Improve Bus Factor In Your Open Source Project explanation below.
In my experience (I was an open source community manager for several years and am deeply embedded in the community of people who do open source outreach), getting people into the funnel for your project as first-time contributors is a reasonably well-solved problem, i.e., we know what works. Showing up at OpenHatch events, making sure the bugs in the bug tracker are well-specified, setting up a "good for first-timers" task tag and/or webpage and keeping it updated, personally inviting people who have reported bugs to help you solve them, etc. If you can invest several months of one-on-one or two-on-one mentorship time, participate in Google Summer of Code and/or Outreachy internship programs. If you want to start with something that's quantitative and gamified, consider using Google Code-In as a scaffold to help you develop the rest of these practices.
You need to quickly thank and give useful feedback to people who are already contributing, even if that feedback will include criticism. A fast first review is key, and here's a study that backs that up. Slide 8: "Most significant barrier to engaging in onramping others is unclear communications and unfriendly community. Access to the right tools has some effect." Slide 26:
"Contributors who received code reviews within 48 hours on their first bug have an exceptionally high rate of returning and contributing.
Contributors who wait longer than 7 days for code review on their first bug have virtually zero percent likelihood of returning.
Showing a contributor the next bug they can work on dramatically improves the odds of contributing."
(And "Github, transparency, and the OTW Archive project" discusses how bad-to-nonexistent code review and bad release management led to a volunteer dropping out of a different open source project.)
In my opinion, building bus factor for your project (growing new maintainers for the future) is also a solved problem, in that we know what works. You show up. You go to the unfashionable parts of our world where the cognitive surplus is -- community colleges, second- and third-tier four-year colleges, second- and third-tier tech hubs, boring enterprise companies. You review code and bug reports quickly, you think of every contributor (of any sort) as a potential co-maintainer, and you make friendly overtures to them and offer to mentor them. You follow OpenHatch's recommendations. You participate in Google Summer of Code and/or Outreachy internship programs.
Mentorship is a make-or-break step here. This is a key reason projects participate in internship programs like GSoC and Outreachy. For example, Angela Byron was a community college student who had never gotten involved in open source before, and then heard about GSoC. She thought "well it's an internship for students, it'll be okay if I make mistakes". That's how she got into Drupal. She's now a key Drupal maintainer.
Dreamwidth, an open source project, started with two maintainers. They specifically decided to make the hard decision to slow down on feature development, early on, and instead pay off technical debt and teach newcomers. Now they are a thriving, multimaintainer project. "dreamwidth as vindication of a few cherished theories" is perhaps one of my favorite pieces on how Dreamwidth did what it did. Also see "Teaching People to Fish" and this conference report.
Maintainers must review code, and that means that if you want someone to turn into a maintainer in your project, you must help them learn the skill of code review and you must help them get confident about vetoing and merging code. In my experience, yes, a good automated test suite does help people get more confident about merging changes in. But maintainers also need to teach candidates what their standards ought to be, and encourage them (many contributors' first thought when someone says "would you want to comaintain this project with me?" is "what? me? no! I'm not good enough!"). Here's a rough example training.
If you want more detailed ways to think about useful approaches and statistics, I recommend Mel Chua's intro to education psychology for hackers and several relevant chapters in Making Software: What Really Works and Why We Believe It, from O'Reilly, edited by Greg Wilson & Andy Oram. You'll be able to use OpenHub (formerly Ohloh) for basic stats/metrics on your open source project, including numbers of recent contributors. And if you want more statistics for your own project or for FLOSS in aggregate, the open source metrics working group would also be a good place to chat about this, to get a better sense of what's out there (in terms of dashboards and stats) and what's needed. (Since then: also see this post by Dawn Foster.)
We know how to do this. Open source projects that do it, that are patient with the human factor, do better, in the long run.
# 22 Apr 2015, 12:14PM: How Knowledge Workers Can Learn More About Open Source Tools They Use:
Yesterday I spent an hour teaching a woman whose nonprofit wants improvements to their current Drupal setup, especially around content approval workflow and localization. She wanted to understand more about how Drupal works so that she can understand the potential problems and solutions better, and be a better partner to her technical colleagues.
I talked with her a little about those specific questions, but most of what I taught her would be appropriate to any knowledge worker who wants to learn more about an open source web application. I pointed her to some resources and figured they were worth mentioning here as well.
- The Felder-Silverman engineering learning styles questionnaire. You knew I would do this. I am such a pusher. Whenever I hear someone talk about the frustrations they've had in learning how to bend software to their will, especially if they get self-blamey or overwhelmed with approaches and resources, I suggest they take this quiz. It's helped me and other people reduce self-blame and get more strategic.
- The English Wikipedia page about Drupal. Sometimes open source projects' websites are not, to use the church jargon, "seeker-sensitive." In those cases, Wikipedia often has good summaries to answer questions like "What's the latest stable version?" and "What are key terms I need to understand to look up more help?"
- The Freenode webchat service, so you can join an Internet Relay Chat channel without having to install new software. Most open source projects have live chat channels, where you can ask questions, on the Freenode IRC network. You can make up a nickname -- it's not permanent -- and join, for instance, the channel #drupal-support (guide to using IRC politely). Thanks to eevensen and ciss in that channel yesterday for tips:
[15:31] nyplguest: I'm starting to get into using Drupal - what's the best intro glossary/document to help me understand the vocab, like blocks and views? (I'm used to another system)
[15:34] eevensen: @nyplguest I recommend
[15:35] ciss: nyplguest: https://www.drupal.org/glossary
[15:36] nyplguest: Thank you ciss!
[15:37] nyplguest: Thank you eevensen as well!
- The NYC Drupal group, which in the past has run a Drupal Ladder series of events to teach and train new contributors. (I know of Drupal Ladder mostly because my pal Fureigh led Drupal Ladder in NYC and gave an Open Source Bridge talk about it.)
- The new Wikimedia content translation tool that makes it easier for you to translate articles. Maybe your website can do something similar.
- The "workflow" Drupal group, which looks like a place you can ask how to set up the workflow and content approval process you want.
- Some things I learned about domain names and hosting, and things I learned about Drupal. This included discussion of:
- "The Five Stages of Hosting" (e.g., dorm room versus condo). Such a useful analogy.
- DigitalOcean, the "dorm room"-type provider I use. It's been a good deal for what I've needed, namely, a test server that I can blow away at the slightest provocation. https://www.digitalocean.com/?refcode=82e7b02dea11 is a referral link to get a $10 credit at signup (that's 2 months' worth of service at the $5/month plan).
Since she may end up with a test server so she can play with Drupal modules and configuration, I also talked with her a bit about what it means to ssh into a server; the fact that she will probably have to install new software (a console or terminal application) on her Windows computer to do that; and the basics of how public key infrastructure and SSH keypairs work, and why they're more secure than a plain username and password. I did this without notes or links, so I don't have any to offer here; perhaps you have a favorite explanation you'll share in the comments?
Overall in these kinds of conversations I refrain from saying "do this" or "do that", but I did share these two bits of wisdom:
- When you generate a keypair, the .pub file is the one to give other people, and the other one you keep to yourself.
- Make an effort to remember that passphrase. Otherwise you will be unable to use your key, and you'll have to have a slightly embarrassing conversation where you say "here's the new .pub because I forgot my passphrase for the old one," and it delays whatever you were going to do. But I showed her my ~/.ssh directory with all those old keys I can no longer access, and told her that if she does end up needing to make a new keypair, she is in good company, and basically everyone with an SSH key has gone through this at least once.
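For the curious, here's roughly what generating a keypair looks like on the command line. This is a sketch, not gospel: "mykey" is just an example filename, and your OpenSSH version's defaults and prompts may differ a bit.

```shell
# Generate a keypair (a minimal sketch; flags vary by OpenSSH version).
# -f says where to put the key files; -N sets the passphrase.
ssh-keygen -t rsa -b 4096 -f mykey -N 'a passphrase you will remember'

# Two files result:
#   mykey     -- the private key: the one you keep to yourself
#   mykey.pub -- the public key: the one you give other people
ls mykey mykey.pub

# Later, to log in to a server with the key instead of a password,
# you'd run something like:
#   ssh -i mykey username@server.example.com
```

(The server name and username above are placeholders; your hosting provider will tell you the real ones.)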
We talked about getting her a community of practice so she could have more people to learn from. She now knows of the local Drupal group and of some get-togethers of technologists in her professional community. And she has some starting points so she can ask more productive questions of the technologists within her org.
And this stuff is frustrating, and if you feel that way, that's okay; lots of other people feel that way too, and maybe it just means you need to try a new approach.
# (1) 25 Dec 2014, 12:48AM: Good And Bad Signs For Community Change, And Some Leadership Styles:
So let's assume you want to improve a particular community, and you've already read my earlier pieces, which I am now declaring prerequisites: "Why You Have To Fix Governance To Improve Hospitality", "Hospitality, Jerks, and What I Learned", and "Learn Tech Management in 45 Minutes" (all the way through the Q&A). And let's assume that you care about the community having a good pathway to inclusion, and that the community is caring or collaborative, rather than cordial, competitive, or combative.
When I look at an open stuff community, here are some factors that make me optimistic:
- people with social capital in the project, whom other participants respect, support my goals in private conversation
- even better: such people have reached out to me, of their own initiative, about it
- even better than that: such people are already taking real action
- I have personal relationships with at least one influential project leader
- I am in the private spaces where project leaders talk
- either the project's still new and the norms are in flux, or there's a new initiative or subcommunity where I can influence norms or even amend the rules of the game before they jell and harden
And here are warning signs, factors that make me pessimistic:
- the founder of the project exercises charismatic/inertial authority and either does not support my goals, or is too afraid of conflict to take real action
- per Selena Deckelmann's advice, "If someone is treating you with contempt, or you are using contempt in arguments, that's a big warning sign."
- there is a private space where important conversation happens and I'm not invited
- I, or someone else who shares my goals, has been unsuccessful in getting the community to do something small towards my goals. For instance, assuming my goal is improving gender diversity in a male-dominated workplace, I haven't been able to get them to adopt a first code of conduct, or improve a CoC to have real enforcement provisions, or participate in a women-centric job fair, or make a token effort towards diversity in guest speakers.
- not just the rules of the game, but the dominant worldview, and perhaps the major actors, haven't changed in, say, more than three years
To achieve change in this kind of situation, you have to have enough social skills to be able to make relationships, to notice whether contempt has made an appearance, to grok the subtle stuff. A systems approach (leader as engineer) will get you part of the analysis and part of the solution; you also need relatedness (leader as mother). Requisite variety. In the face of a problem, some people reflexively reach more for "make a process that scales" and some for "have a conversation with ____"; perhaps this is the defining difference between introverts and extroverts, or maybe between geeks and nongeeks, in the workplace.* We need both, of course - scale and empathy.
A huge part of my job for the last four years was struggling with the question: how do you inculcate empathy in others, at scale, remotely? How do you balance genuine openness to new people, including people who think very differently from you, with the need for norms and governance and, at times, exclusion?
Huh, I wonder whether this is the first blog entry I've ever tagged both with "Management and Leadership" and "Religion".
# (1) 21 Dec 2014, 11:10PM: Why You Have To Fix Governance To Improve Hospitality:
Fundamentally, if you want to make a community hospitable,* you need to work not just on individual rules of conduct, but on governance. This is because
- the particular people implementing rules of conduct will use their judgment in when, whether, and how to apply those rules, and
- you may need to go a few levels up and change not just who's implementing rules, but who's allowed to make rules in the first place
Wait, how does that work?
In my Wiki Conference 2014 keynote address (available in text, audio, and video), and in my PyCon 2014 poster about Hacker School, I discuss how to make your community hospitable. In those pieces I also mention how the gatekeeping (there is an initiation/selection process) and the paid labor of community managers (the facilitators) at Hacker School help prevent or mitigate bad behavior. And, of course, the Hacker School user manual is the canonical document about what is desired and prohibited at Hacker School; "Subtle -isms at Hacker School" and "Negative comments" have more ruminations on how certain kinds of negativity create a bad learning environment.
Sometimes it's the little stuff, more subtle than the booth babe/groping/assault/slur kind of stuff, that makes a community feel inhospitable to me. When I say "little stuff" I am trying to describe the small ways people marginalize each other but that I did not experience at Hacker School and thus that I noticed more after my sabbatical at Hacker School: dominance displays, cruelty in the guise of honesty, the use of power in inhospitable ways, feeling unvalued, "jokes", clubbiness, watching my every public action for ungenerous interpretation, nitpicking, and bad faith.
You can try to make rules about how things ought to be, about what is allowed and not, but members of the incumbent/dominant group are less accustomed to monitoring their own behavior, as the Onlinesmanship wiki (for community moderators) reminds us:
Another pattern of the privileged: not keeping track of the line between acceptable and unacceptable behavior. They only know they've crossed the line when someone in authority tells them so. If this doesn't happen, their behavior stays bad or gets worse....
Do not argue about their intentions. They'll swear they meant no harm, then sulk like fury because you even suggested it. In most cases they'll be telling the truth: the possibility that they were giving offense never crossed their minds. Neither did any other scenario, because unlike real adults, they take no responsibility for getting along with others. The idea that in a cooperative work situation, getting along with one's fellow employees is part of the job, is not in their worldview.
This too is a function of privilege. They assume they won't get hit with full penalties for their first offense (or half-dozen offenses), and that other people will always take on the work of tracking their behavior, warning them when they go over the line, and explaining over and over again what they should have done and why. It's the flip side of the way people of the marked state get hit with premature negative judgements (stupid, dishonest, sneaky, hysterically oversensitive) on the basis of little or no evidence.
And, in any community, rules often get much more leniently interpreted for members of the dominant group. And this is even harder to fight against when influential people believe that no marginalization is taking place; as Abi Sutherland articulates: "The problem with being lower on an unstated social hierarchy is that marginal judgment calls will reliably go against you. It's an excusable form of reinforcement."**
Changing individual rules isn't enough. After all, individual rules get made by particular humans, who -- here, instead of babbling about social rule system theory at you, I'll give you a sort of sidebar about three successive levels of governance, courtesy of my bachelor's degree in political science:***
- Actors: The actual set of people who run an organization or who shape agendas, on any given day, have particular ideas and policies and try to get certain things done. They implement and set and change regulations. Actors turn over pretty fast.
- For example, in its five-year history, Hacker School has had employees come and go, and new participants have become influential alumni.
- Dominant worldviews: More deeply and less ephemerally, the general worldview of the group of people who have power and influence (e.g., Democrats in the executive branch of the US government, sexists in mass media, surgeons in operating rooms, deletionists on English Wikipedia) determines what's desirable and what's possible in the long term. Churn is slower on this level.
- For example, dominant worldviews among Hacker Schoolers**** include: diversity of Hacker Schoolers, on several axes, helps everyone learn more. Hiding your work, impostor syndrome, too much task-switching, and the extrinsic motivation of job-hunting are common problems that reduce the chances of Hacker Schoolers' success. Careers in the tech industry are, on balance, desirable.
- Rules of the game: What is sacred? What is so core to our identity, our values, that breaking one of these means you're not one of us? The rules of the game (e.g., how we choose leaders, what the rulers' jurisdiction is) confer legitimacy on the whole process. Breaking these rules is heresy and amending them is very hard and controversial.***** Publicly disagreeing with the rules of the game costs lots of political capital.
- For example, the rules of the game among Hacker Schoolers, as I see them, include: the founders of Hacker School and their employees have legitimate authority over admissions, hiring, and rule enforcement. Hacker School is (moneywise) free to attend. Admission is selective. A well-designed environment that helps people do the right thing automatically is better than one-on-one persuasion, which is still better than coercion.
(Where do the four Hacker School social rules fall in this framework? I don't know. Hacker School's founders encourage an experimental spirit, and I think they would rather stay fluid than accrete more and more sacred texts. But, as more and more participants have experienced a Hacker School with the four social rules as currently constituted, I bet a ton of my peers perceive the social rules as DNA at this point, inherent and permanent. I'm not far from that myself.)
(I regret that I don't have the citation to hand, and would welcome the name of the theorists who created this model.)
So, if you want a hospitable community, it's not enough to set up a code of conduct; a CoC can't substitute for culture. Assuming you're working with a pre-existing condition, you have to assess the existing power structures and see where you have leverage, so you can articulate and advocate new worldviews, and maybe even move to amend the rules of the game.
How do you start? This post has already gotten huge, so, I'll talk about that next time.
* I assume that we can't optimize every community or activity for hospitality and learning. Every collaborative effort has to balance execution and alignment; once in a while, people who have already attained mastery of skill x just need to mind-meld to get something done. But if we want to attract, retain, and grow people, we need to always consider the pathway to inclusion. And that means, when we accept behavior or norms that make it harder for people to learn, we should know that we're doing it, and ask whether that's what we want. We should check.
**See the second half of "One Way Confidence Will Look" for more on the unwillingness to see bias.
*** I am quite grateful for my political science background -- not least because I learned that socially constructed things are real too, which many computer science-focused people in my field seem to have missed, which means they can't mod or make new social constructs as easily. Requisite variety.
**** A non-comprehensive list, of course. And I don't feel equal to the more nuanced question: what beliefs do the most influential Hacker Schoolers hold, especially on topics where their worldview is substantially different from their peers'?
***** The US has a very demanding procedure for amending the Constitution. India doesn't. The US has had 27 amendments in 227 years; India, 98 in 67 years. I don't know how to interpret that.
# (1) 10 Aug 2014, 07:34PM: Resources For Starting Your Own Thing:
I've had two different conversations recently with feminist women who want to start their own tech startups. Even though I have never done that, it turns out that I had things to tell them that they did not already know! NON-ORDERED LIST TIME!
I'm sure this is as incomplete as "Here Are Some Grants You Could Apply For" was. Also, as I mentioned, I totally have not done this and websearching around for startup advice from founders will get you a zillion interesting results, and if they contradict me then you should probably believe them instead.
- If you're thinking of starting your own company or nonprofit, check out the books at Anti 9-to-5 Guide (thanks for the rec, Fureigh) and the resources Kronda Adair mentioned at her Open Source Bridge talk "Stop Crying in the Bathroom and Start Your Own Business". If you're specifically thinking about a for-profit product-or-service startup: my friend Rachel Chalmers, a venture capitalist (someone who invests early money into a startup), wrote about why you should be wary of venture capital(ists). You're welcome to reach out to her to pitch your idea. Also read the cautionary words about investor storytime in "The Internet with a Human Face", and some thoughts about less efficient startups.
- If you want to start your organization in order to cause change in the world, have a theory of change. I love that Open Tech Fund tells you what their theory of change is (see the last sentence in their funding model). The Ada Initiative's change strategy is in its FAQ. Here's a sample exercise you can do, courtesy of Wikimedia Foundation's Learning and Evaluation team.
- Remember that you could be a social enterprise -- a mission-driven for-profit company, like Etsy, Growstuff, or Dreamwidth. (Skud, founder of Growstuff, maintains the Growstuff blog and you can, for instance, see a snapshot of its finances.)
- If you are specifically looking to start an organization as a means of increasing diversity in tech, read "Trying to get paid to work on diversity in tech? Read this" and "The Ada Initiative Founders on Funding Activism for Women in Open Source". Consider your theory of change, and look at who's already trying out the method you're thinking about (bootcamps, apprenticeships, online tools, recruiting/hiring arbitrage, after-school programs, training allies, convenings, curriculum change...). And -- as Jessica McKellar entreats us -- once you start trying things, measure what you're doing, so you know whether you're effective.
- If you want to make a product or service that specifically helps people who have mental illnesses, you're not alone -- for instance, at least one person is "Designing an ADHD-friendly to-do app" -- and there's certainly a market there -- for instance, the Compassionate Language Learner, who has depression, uses Lift. And there's a curb cut principle here, where making something that helps reduce anxiety and enhance executive function can help a lot of users, neurodiverse and neurotypical both. One could look at Graze, ZocDoc, and Fancy Hands as models here.
- If you need to improve your own programming skills in order to found effectively, check out the Felder-Silverman learning styles and use your self-assessment to help you choose useful learning activities. If you're trying to choose and stick with learning projects, you may find my piece "From 'sit still' to 'scratch your own itch'" helpful. Maybe you'd enjoy making funny or feminist things. If you've already programmed a bit before, try porting something you already made into a new language. Go ahead and copy existing things that you think are cool, e.g., Hollaback, Listen to Wikipedia. This is learning time and it is OK not to make new things the world needs. You can learn and then build the thing you want to exist. (This helps us see why games are popular learning projects: you have a ready-made specification to work from, so you don't need to decide "how should this work?", and they make people feel happy.)
- If you have not done executive-y things before, check out my Open Source Bridge talk "Learn Tech Management In 45 Minutes".
- If you have, without knowing it, been waiting for someone to give you permission to do this: I give you permission. (No kidding, I said this to one of them, because she realized she needed it. Permission granted!)
# (4) 30 Jul 2014, 11:47AM: Here Are Some Grants You Could Apply For:
When I tell people about grants they could get to help them work on open source/open culture stuff, sometimes they are surprised because they didn't know such grants existed. Here are some of them!
Grants with deadlines:
- Urgent: August 1st is the deadline for the Knight Prototype grant which "helps media makers, technologists and tinkerers take ideas from concept to demo. With grants of $35,000, innovators are given six months to research, test core assumptions and iterate before building out an entire project."
- Also coming up fast: August 4th is your deadline to apply for the Open Society Fellowship, which gives you about USD$80,000-100,000 to work on a project for a year.
- September 30th is the deadline for Individual Engagement Grants applications. IEG projects "support Wikimedians to complete projects that benefit the Wikimedia movement. Our focus is on experimentation for online impact. We fund individuals or small teams to organize, build, create, research or facilitate something that enhances the work of Wikimedia's volunteers." The maximum grant request is USD$30,000.
- If you're a woman working on a tech project that will benefit girls and women in tech, check out The Anita Borg Systers Pass-It-On (PIO) Awards, which range from USD$500-$1000. The next round opens for applications on August 6th.
- It looks like November 2014 is the deadline to apply for the Drupal Community Cultivation Grants: "to support current and future organizers and leaders of DrupalCamps, Drupal Meetups, Drupal Sprints, Drupal coalitions, and other creative projects that are spreading information within the Drupal community and educating individuals outside the community about Drupal... Grant awards will range from several hundred to several thousand dollars per project".
Grants that you can apply for anytime:
- Wikimedia project and event grants, which "support organizations, groups, and individuals to undertake innovative, mission-aligned projects that benefit the Wikimedia movement." Grants usually vary from USD$500-50,000.
- Mozilla makes grants ranging from USD$1,000-300,000 "to people and organizations we know, who are either working with us or in a closely related field" (specifically: Learning & Webmaking; Open Source Technology; User Sovereignty; Free Culture & Community).
- "The Python Software Foundation welcomes grant proposals for projects related to the development of Python, Python-related technology, educational programs and resources." It looks like they've granted amounts from about USD$500-10,000 in the past. If you want to run a Python-related hackfest/sprint, there's money for that too, to help with food, venue, and so on, for up to USD$300.
- The Sunlight Foundation offers grants USD$5,000-10,000 to open source projects that "make government more open and accessible".
- The Open Technology Fund makes grants "to support innovative efforts and new ideas from individuals and organizations globally defending freedom of expression online" and basically considers new "concept notes" (lightweight proposals) every two months. They are interested in making grants around USD$75,000-500,000.
- Wikimedia's "Travel & Participation Support funds Wikimedians to actively represent Wikimedia at events around the world." I believe most grants are for a few hundred or a few thousand dollars, to cover "travel, accommodation and incidental expenses." Many Wikimedia-specific events have their own scholarship programs as well to subsidize participation -- I know that a lot of open stuff events (e.g., PyCon, WisCon) also offer financial assistance in case you need it to get to the event.
- Edited (on August 4th) to add: TPF (The Perl Foundation) also offers grants for a variety of work that would benefit Perl in some way. TPF evaluates applications every two months, i.e., January, March, May, July, September and November. "Each grant is budgeted individually, according to the duration of the award, the recipient's financial needs, and projected expenses (travel, equipment, etc.) A typical amount for a 12-month grant involving some domestic US travel would be US$80,000." Past grants have been as low as a few hundred dollars.
This partially overlaps with the list that OpenHatch maintains on its wiki (and which I or someone else ought to update), and I have not even scratched the surface really. So anyway, yes, if you need some financial help to do better or more work in open stuff, take a look!
# 03 Jun 2014, 08:39AM: Choosing Older Or Younger Open Source Projects To Work On:
Larger, older open source projects have more people, more getting-started resources for new contributors, more name recognition, and sometimes more money to spend. (Examples: the Linux kernel, MediaWiki (the software behind Wikipedia, part of Wikimedia), Mozilla (the makers of Firefox), WordPress.)
Younger ones, with smaller contributor populations and smaller codebases, sometimes give new contributors more responsibility and power quickly, change faster in response to new ideas, and have more malleable culture -- and you can become one of the few World Experts in that technology more easily. (Examples: Tornado, ClojureScript, MetricsGrimoire, ThinkUp.)
So, while Mozilla, GNOME, Wikimedia, etc. have bigger budgets and more formal programs, and often have a larger worldwide impact, it could be that smaller and younger projects will give you more relative expertise faster. It's worth considering.
(You can use Ohloh to find open source projects on a particular topic, and see how many contributors they already have, and to compare projects. Take the statistics with a grain of salt, though; sometimes they're off.)
# (2) 26 Feb 2014, 07:10PM: Some Help for New Open Source People:
Wikimedia is participating in this year's Google Summer of Code internships and Outreach Program for Women. This week we are seeing a bunch of new folks try to learn how to navigate the world of open source, and I have some advice for you. Some of this ought to go into the Google Summer of Code student manual and the Open Advice collection.
"Doubt": Lots of GSoC candidates are from South Asia. Indians often say "Can you help resolve my doubts?" where US speakers would say "Can you help answer my questions?" "Doubt" and "question" are synonyms here; the Indians aren't implying suspicion.
How we talk: We talk in different places when we want to have different kinds of conversations. Each open source community has "a mailing list, a wiki, and an IRC channel.... a platform for discussion, storage for documentation and real-time communication." (I borrowed this explanation from the hackerspaces wiki.) An IRC channel is a constant waterfall of conversation and you aren't expected to be there all the time or catch everything. A mailing list is more like a slow-moving river, and a wiki changes slower, like a marsh.
Some people prefer for their IRC conversations to be more like mailing lists -- a long, publicly archived conversation where people can see what happened before and take part. Some people prefer for IRC chat to be more like Snapchat -- ephemeral, temporary, so it's easier to be vulnerable. No one agrees on what all of IRC should be. So the community within each channel has a certain culture and each channel can be different. Some channels allow or encourage public logging (example) so anyone can see what happened in the channel. Others don't. This difference is normal.
The rhythm of help: When you are learning how to contribute in open source, you're going to find that people give you links to pages that answer your questions. Here's how that usually goes:
- you ask a question
- someone directs you to a document
- you go read that document, try to use it to answer your question
- you find you are confused about a new thing
- you ask another question
- now that you have shown that you have the ability to read, think, and learn new things, someone has a longer talk with you to answer your new specific question
- you and the other person collaborate to improve the document that you read in step 3 :-)

This helps us strike a balance between person-to-person discussion and documentation that everyone can read, so we save time answering common questions but also get everyone the personal help they need.
What's this project like?: Figuring out whether something's a good project for you is a skill and new folks don't have that skill yet. My friend Mel wrote a guide to how she checks out an open source project -- how she takes five minutes to look on their website for certain things, to see what kind of project it is. It's fine for you to look for projects where you already have friends, or where they have already set up easy tasks for beginners. We hope that in a year you'll be one of the people coming up with new ideas, organizing those easy tasks, and helping the beginners.
# (1) 12 May 2013, 09:49AM: Tips for New Summer Interns:
Three tips to help new Google Summer of Code applicants and interns, some of which all remote workers could stand to remember:
- Never let yourself get stuck on a technical question or problem for more than half an hour. Take a break, ask questions in IRC or a mailing list, find a technical book to read like The Architecture of Open Source Applications, look at some other codebase to see how they do it, eat a meal, or do something else, then come back to the problem.
- Never let yourself get stuck waiting for someone's reply for more than 2 business days (Monday through Friday). Escalate -- ask your mentor. If your mentor isn't helping, ask your org admin. If the org admin isn't helping, ask on the GSoC discussion forum, or email Carol Smith.
- Ask yourself at the start of every day: what did I accomplish yesterday? What will I try to do today? What are the obstacles I think I will run into? If you ask yourself those three questions and answer honestly -- especially if you let your mentor and team know the answers -- then you will prevent long delays and help keep your morale up.
# (1) 02 Jan 2012, 01:48PM: Self-Care, Sometimes On A Larger Scale:
I think some people I know might find Sam Starbuck's experience useful. He has social anxiety but wanted to leave the house more often, so he developed methods to get himself to do so.
The idea originally was just to get out more; not even necessarily to have more experiences, but not to spend every single night at home. There's nothing wrong with that, in and of itself, but it wasn't what I wanted for me. So I developed the Adventur Programme.
I should say that I suspect the Adventur Programme would be different for everyone, because the key to doing it is finding something that will motivate you to actually follow through. Here's how I did it; the basic theme of all of this is to arrange things in such a way that making the decision to go isn't difficult....
Sam said that his plan
worked well. I think it's because it wasn't a resolution; it was a plan. Resolutions can be broken, and thus expose you to feelings of failure and despair. Whereas plans aren't broken. Plans are rescheduled for a later date. You haven't failed. You've just changed up your calendar a little.
I admire people and organizations that thoughtfully manage their sustainability. You can see Alexandra Erin develop this theme in her behind-the-scenes blogging; as a self-employed writer, she works as hard at developing her own infrastructure as she does at making fiction. For Sam, Alexandra, and me, the structure of a successful process must avoid causing feelings of failure and despair. We know that if we feel those, we'll stop. So we find patterns that suit our strengths and work around our weaknesses, and get us to our goals -- more adventures, more good fiction, better technical skills.
Maturity requires recognizing granite walls and finding workarounds, saying no to machismo.
We know from experience that counting only on unpaid volunteer effort to work on helping women in open technology and culture leads to burnout and inconsistency. So The Ada Initiative works as a nonprofit that pays two people's salaries to work fulltime on the issue. (I volunteer on their Advisory Board.)
In Notes on Nursing, Florence Nightingale wrote of management, "How can I provide for this right thing to be always done?" Even when she's not there? Nightingale focuses on executive energy, attention, and putting the proper processes into place such that patients have the resources and quiet they need to get better.
However, there is a habit of mind that scorns all visible processes (and sees no value in formal communication containers such as meetings or performance reviews). I was talking about this with Ari yesterday, about (for example) software developers who think source control is needless overhead. I imagine some of these folks have suffered from their own personal resource curse, coasting through day-to-day tasks, the accreted cruft not yet salient, atherosclerosis not yet completely blocking the bottleneck.
Some people have the useful skill of translating for these folks, getting across why hygiene is important in some particular case. Sometimes I can do this with analogies. Others use diagrams. But by the time I'm working with someone, it's usually too late to inculcate in them that habit of mind, a critical respect for social infrastructure.
(If you can, try never to work for someone who has this blind spot.)
Like Sam, I'm also working on sustainability and process improvement in my personal life. For me, it's cleaning and housework. What can I do to make it more likely that I'll do my fair share? I already knew that podcasts help. As of last week, I've discovered that I am way better at doing the dishes if I do them first thing in the morning. With enough tips and tricks, maybe I can adequately simulate a good flatmate.
# (1) 29 Nov 2011, 09:39AM: Practices, And Practice:
A few months ago, I was talking with one of MediaWiki's summer interns in our IRC chatroom. He confessed that he had procrastinated on the work for his project and was rushing to finish it before the deadline. We had a chat that he thought other people might also find useful, in thinking about work habits and discipline.
I asked this Google Summer of Code student, "Do you know what caused the delays, so that you can account for them in future projects?" and he replied, "To be honest, procrastination & laziness. I know it's very shameful. I try many times to come out of this vicious circle but keep falling in it again and again."
I asked him whether he knew what works to combat his own procrastination and laziness. "The most important thing is acknowledging one's problems and then fighting them. For example, for me, I have a suite of tactics that I use to combat my laziness & procrastination. What has worked, and what hasn't worked? Well, for me, for example, merely promising something to myself and making deadlines for myself doesn't help. But setting up a meeting with a peer to sprint -- even if we're working on completely different things! -- or promising a peer or a mentor that I will give them something to review by $time or $date helps."
He said, "motivation works but only for some time."
I replied: "what do you mean by 'motivation'? Merely telling yourself to increase your willpower? I think for most people that is unsustainable."
Another woman agreed with me: "motivation only works if it's a core part of you (and even then for me it's more the worry that other people will find me to not have that quality)." I sympathized with her.
I continued with more tips. For example, I also try to set very small TODO lists each day, because I find that the most important thing is getting started, and avoiding feeling intimidated and overwhelmed. Then once I have the momentum of a little work under my belt, the energy and interest of the work itself keeps me going and then I accomplish a lot.
"So, I know this advice is coming a little too late for you to use it for GSoC, but an accountability buddy program is great," I told him. If he hadn't had daily deliverables due to his mentor during GSoC, then the next time he could try that -- or a private accountability group blog with you & two friends, posting each day what you did, what you aim to do, how long it'll take, and auditing yourself. Instead of budgeting for 8 hours of work each day, I budget tasks that will take at most 6 hours, because I know other random stuff will come in and need doing urgently, and some tasks may take longer than I've estimated. This also helps on the "less intimidating TODO list" front.
We also discussed education; many colleges teach mostly theory, and a student who wants practice has to find it on her own. I said that there is always that balance of theory & implementation/practice. I told him that I wish I had been more brave and bold about experimentation when I was in college. It's just software; if it breaks then you can fix it. I was too timid. I pointed him to a Geek Feminism post of mine for some insight on my education regrets and hopes.
And, on the improvement that comes from working in a different environment, I gave an example: "Friday, I was having trouble doing work while sitting on the couch, so I sat on the floor with my back to the couch, and that helped! just a tiny change of position signalled to my unconscious that it was not relaxation time. For me, it can be as little as a different chair in the same room."
He was pretty grateful.
Him: now i know the power of honest revelations, i was looking for this from so long!
Me: so the trick is not being disciplined about work -- that is ineffective, exhausting, and dispiriting -- but being disciplined about the habit that tricks us into working. No learning is wasted. Take this for next time.
Him: sumanah: i would shower a million thanks if i could, you have striked the very core problem of mine n gave me very practical solution
Me: the best thanks you can give me is to continue to contribute to Wikimedia and to tell your friends these tips as well
Him: sumanah: yes, I will keep contributing to the best of my abilities
Him: now, I really feel that I am not the loner who does all that stuff!
Me: you are not alone.
Him: you should also blog a few lines like the tip you told me, it would help millions
Me: I will strongly consider that. Thanks.
I've edited the original log for easier legibility.
A line that others have found useful is "so the trick is not being disciplined about work -- that is ineffective, exhausting, and dispiriting -- but being disciplined about the habit that tricks us into working."
But the best part of that conversation, for me, was being able to tell someone, "you are not alone." That always makes a red-letter day.
# (1) 14 Jul 2009, 10:15AM: Obvious Tech Talk Q&A Prep:
A certain species of tech talk goes like: "Here's a product/methodology/tool I hack on, here's what it's good for and how/why you should add it to your toolkit." It's an honorable and useful presentation topic. As you prepare your talk, think about the questions your audience will have in the back of its head. If you can address them in the talk itself, great. If not, prepare answers for use in the questions-and-answers session.
- How do I get started using it?
- Why should I use this instead of the competition?
- Security implications?
- Performance implications? ("Yes, but does it scale?")
- Who's using this in real life?
- Where's the project going next? What do you need help with?
- What language is it written in?
- Why did you name it that?
The most important question is the one you hope no one asks because the answer is embarrassing. What would your smartest enemy ask?
(List developed while helping Youness practice his libnice talk last week.)
You can hire me through Changeset Consulting.
This work by Sumana Harihareswara is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.
Permissions beyond the scope of this license may be available by emailing the author at email@example.com.