
A user story about user stories

Post Syndicated from esr original http://esr.ibiblio.org/?p=8720

The way I learned to use the term “user story”, back in the late 1990s at the beginnings of what is now called “agile programming”, was to describe a kind of roleplaying exercise in which you imagine a person and the person’s use case as a way of getting an outside perspective on the design, the documentation, and especially the UI of something you’re writing.

For example:

Meet Joe. He works for Randomcorp, who has a nasty huge old Subversion repository they want him to convert to Git. Joe is a recent grad who got thrown at the problem because he’s new on the job and his manager figures this is a good performance test in a place where the damage will be easily contained if he screws up. Joe himself doesn’t know this, but his teammates have figured it out.

Joe is smart and ambitious but has little experience with large projects yet. He knows there’s an open-source culture out there, but isn’t part of it – he’s thought about running Linux at home because the more senior geeks around him all seem to do that, but hasn’t found a good specific reason to jump yet. In truth most of what he does with his home machine is play games. He likes “Elite: Dangerous” and the Bioshock series.

Joe knows Git pretty well, mainly through the Tortoise GUI under Windows; he learned it in school. He has used Subversion just enough to know the basic commands. He found reposurgeon by doing web searches. Joe is fairly sure reposurgeon can do the job he needs and has told his boss this, but he has no idea where to start.

What does Joe’s discovery process look like? Read the first two chapters of “Repository Editing with Reposurgeon” using Joe’s eyes. Is he going to hit this wall of text and bounce? If so, what could be done to make it more accessible? Is there some way to write a FAQ that would help him? If so, can we start listing the questions in the FAQ?

Joe has used gdb a little as part of a class assignment but has not otherwise seen programs with a CLI resembling reposurgeon’s. When he runs it, what is he likely to try to do first to get oriented? Is that going to help him feel like he knows what’s going on, or confuse him?

“Repository Editing…” says he ought to use repotool to set up a Makefile and stub scripts for the standard conversion workflow. What will Joe’s eyes tell him when he looks at the generated Makefile? What parts are likeliest to confuse him? What could be done to fix that?

Joe, my fictional character, is about as little like me as is plausible at a programming shop in 2020, and that’s the point. If I ask abstractly “What can I do to improve reposurgeon’s UI?”, it is likely I will just end up spinning my wheels; if, instead, I ask “What does Joe see when he looks at this?” I am more likely to get a useful answer.

It works even better if, even having learned what you can from your imaginary Joe, you make up other characters that are different from you and as different from each other as possible. For example, meet Jane the system administrator, who got stuck with the conversion job because her boss thinks of version-control systems as an administrative detail and doesn’t want to spend programmer time on it. What do her eyes see?

In fact, the technique is so powerful that I got an idea while writing this example. Maybe in reposurgeon’s interactive mode it should issue a first line that says “Interactive help is available; type ‘help’ for a topic menu.”

However. If you search the web for “design by user story”, what you are likely to find doesn’t resemble my previous description at all. Mostly, now twenty years after the beginnings of “agile programming”, you’ll see formulaic stuff equating “user story” with a one-sentence soundbite of the form “As an X, I want to do Y”. This will be surrounded by a lot of talk about processes and scrum masters and scribbling things on index cards.

There is so much gone wrong with this it is hard to even know where to begin. Let’s start with the fact that one of the original agile slogans was “Individuals and Interactions Over Processes and Tools”. That slogan could be read in a number of different ways, but under none of them at all does it make sense to abandon a method for extended insight into the reactions of your likely users for a one-sentence parody of the method that is surrounded and hemmed in by bureaucratic process-gabble.

This is embedded in a larger story about how “agile” went wrong. The composers of the Agile Manifesto intended it to be a liberating force, a more humane and effective way to organize software development work that would connect developers to their users to the benefit of both. A few of the ideas that came out of it were positive and important – besides design by user story, test-centric development and refactoring leap to mind.

Sad to say, though, the way “user stories” became trivialized in most versions of agile is all too representative of what it has often become under the influence of two corrupting forces. One is fad-chasers looking to make a buck on it, selling it like snake oil to managers forever perplexed by low productivity, high defect rates, and inability to make deadlines. Another is the managers’ own willingness to sacrifice productivity gains for the illusion of process control.

It may be too late to save “agile” in general from becoming a deadening parody of what it was originally intended to be, but it’s not too late to save design by user story. To do this, we need to bear down on some points that its inventors and popularizers were never publicly clear about, possibly because they themselves didn’t entirely understand what they had found.

Point one is how and why it works. Design by user story is a trick you play on your social-monkey brain that uses its fondness for narrative and characters to get you to step out of your own shoes.

Yes, sure, there’s a philosophical argument that stepping out of your shoes in this sense is impossible; Joe, being your fiction, is limited by what you can imagine. Nevertheless, this brain hack actually works. Eppur si muove; you can generate insights with it that you wouldn’t have had otherwise.

Point two is that design by user story works regardless of the rest of your methodology. You don’t have to buy any of the assumptions or jargon or processes that usually fly in formation with it to get use out of it.

Point three is that design by user story is not a technique for generating code, it’s a technique for changing your mind. If you approach it in an overly narrow and instrumental way, you won’t imagine apparently irrelevant details like what kinds of video games Joe likes. But you should do that sort of thing; the brain hack works in exact proportion to how much imaginative life you give your characters.

(Which, in particular, is why “As an X, I want to do Y” is such a sadly reductive parody. This formula is designed to stereotype the process, but stereotyping is the enemy of novelty, and novelty is exactly what you want to generate.)

A few of my readers might have the right kind of experience for this to sound familiar. The mental process is similar to what in theater and cinema is called “method acting.” The goal is also similar – to generate situational responses that are outside your normal habits.

Once again: you have to get past tools and practices to discover that the important part of software design – the most difficult and worthwhile part – is mindset. In this case, and temporarily, someone else’s.

Rules for rioters

Post Syndicated from esr original http://esr.ibiblio.org/?p=8708

I had business outside today. I needed to go in toward Philly, closer to the riots, to get a new PSU put into the Great Beast. I went armed; I’ve been carrying at all times while awake since Philadelphia started to burn and there were occasional reports of looters heading into the suburbs in other cities.

I knew I might be heading into civil unrest today. It didn’t happen. But it still could.

Therefore I’m announcing my rules of engagement should any of the riots connected with the atrocious murder of George Floyd reach the vicinity of my person.

  1. I will shoot any person engaging in arson or other life-threatening behavior, issuing a warning to cease first if safety permits.
  2. Blacks and other minorities are otherwise safe from my gun; they have a legitimate grievance in the matter of this murder, and what they’re doing to their own neighborhoods and lives will be punishment enough for the utter folly of their means of expression once the dust settles.
  3. White rioters, on the other hand, will be presumed to be Antifa Communists attempting to manipulate this tragedy for Communist political ends; them I consider “enemies-general of all mankind, to be dealt with as wolves are” and will shoot immediately, without mercy or warning.

Designing tasteful CLIs: a case study

Post Syndicated from esr original http://esr.ibiblio.org/?p=8697

Yesterday evening my apprentice, Ian Bruene, tossed a design question at me.

Ian is working on a utility he calls “igor” intended to script interactions with GitLab, a major public forge site. Like many such sites, it has a sort of remote-procedure-call interface that allows you, as an alternative to clicky-dancing on the visible Web interface, to pass it JSON datagrams and get back responses that do useful things like – for example – publishing a release tarball of a project where GitLab users can easily find it.

Igor is going to have (actually, already has) one mode that looks like a command interpreter for a little minilanguage, with each command being an action verb like “upload” or “release”. The idea is not so much for users to drive this manually as for them to be able to write scripts in the minilanguage which become part of a project’s canned release procedure. (This is why GUIs are irrelevant to this whole discussion; you can’t script a GUI.)

Ian, quite reasonably, also wants users to be able to run simple igor commands in a fire-and-forget mode by typing “igor” followed by command-line arguments. Now, classically, under Unix, you would expect a single-line “release” command to be designed to look something like this:

$ igor -r -n fooproject -t 1.2.3 foo-1.2.3.tgz

(To be clear, the dollar sign on the left is a shell prompt, put in to emphasize that this is something you type direct to a shell.)

In this invocation, the “-r” option says “I want to do a release”, the -n option says “This is the GitLab name of the project I’m shipping a release of”, the -t option specifies a release tag, and the following filename argument is the name of the tarball you want to publish.

It might not look exactly like this. Maybe there’d be yet another switch that lets you attach a release notes file. Maybe you’d have the utility deduce the project name from the directory it’s running in. But the basic style of this CLI (= Command Line Interface), with option flags like -r that act as command verbs and other flags that exist to attach their arguments to the request, is very familiar to any Unix user. This is what most Unix system commands look like.

One of the design rules of the old-school style is that the first token on the line that is not a switch argument terminates recognition of switches. It, and all tokens after it, are treated as arguments to be passed to the program and are normally expected to be filenames (or, in the 21st century, filename-like things like URLs).

Another characteristic of this style is that the order of the switch clauses is not fixed. You could write

$ igor -t 1.2.3 -n fooproject -r foo-1.2.3.tgz

and it would mean the same thing. (Order of the following arguments, on the other hand, will usually be significant if there is more than one.)
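The flexible switch order of the old-school style falls out naturally from a stock option parser. Here is a minimal Python sketch of the hypothetical “igor -r” one-liner; the flag names are the illustrative ones from this example, not necessarily igor’s real interface:

```python
import argparse

# Sketch of the old-school CLI from the example above; -r/-n/-t are
# the illustrative flags used in this post, not igor's actual ones.
parser = argparse.ArgumentParser(prog="igor")
parser.add_argument("-r", "--release", action="store_true",
                    help="I want to do a release")
parser.add_argument("-n", "--name", help="GitLab name of the project")
parser.add_argument("-t", "--tag", help="release tag")
parser.add_argument("tarball", help="tarball to publish")

# Switch clauses can appear in any order; both lines parse identically.
a = parser.parse_args(["-r", "-n", "fooproject", "-t", "1.2.3", "foo-1.2.3.tgz"])
b = parser.parse_args(["-t", "1.2.3", "-n", "fooproject", "-r", "foo-1.2.3.tgz"])
assert a == b
```

Note how the parser, not your code, carries the burden of remembering which switch goes with which argument; the cognitive load lands on the user instead.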

For purposes of this post I’m going to call this style old-school UNIX CLI, because Ian’s puzzlement comes from a collision he’s having with a newer style of doing things. And, actually, with a third interface style, also ancient but still vigorous.

When those of us in Unix-land only had the old-school CLI style as a model it was difficult to realize that all of those switches, though compact and easy to type, imposed a relatively high cognitive load. They were, and still are, difficult to remember. But we couldn’t really notice this until we had something to contrast it with that met similar functional requirements with lower cognitive effort.

Though there may have been earlier precedents, the first well-known program to use something recognizably like what I will call new-school CLI was the CVS version control system. The distinguishing trope was this: Each CVS command begins with a subcommand verb, like “cvs update” or “cvs checkout”. If there are switches, they normally follow the subcommand rather than preceding it. And there are fewer switches.

Later version-control systems like Subversion and Mercurial picked up on the subcommand idea and used it to further reduce the number of arbitrary-looking switches users had to remember. In Subversion, especially, your normal workflow could consist of a sequence of svn add, svn update, svn status, and svn commit commands during which you’d never type anything that looked like an old-school Unixy switch at all. This was easy to remember, easy to document, and users liked it.

Users liked it because humans are used to remembering associations between actions and natural-language verbs; “release” is less of a memory load than “-r” even if it takes longer to type. Which illuminates one of the drivers of the old-school style; it was shaped back in the 1970s by 110-baud Teletypes on which terseness and only having to type a few characters was a powerful virtue.

After Subversion and Mercurial, Git came along, with its CLI written in a style that, though it uses leading subcommand verbs, is rather more switch-heavy. From the point of view of being comfortable for users (especially new users), this was a pretty serious regression from Subversion. But then the CLI of git wasn’t really a design at all; it was an accretion of features that there was little attempt to simplify or systematize. It’s fair to say that git has succeeded despite its rather spiky UI rather than because of it.

Git is, however, a digression here; I’ve mainly described it to make clear that you can lose the comfort benefits of the new-school CLI if a lot of old-school-style switches crowd in around the action verbs.

Next we need to look at a third UI style, which I’m going to call “GDB style” because the best-known program that uses it today is the GNU symbolic debugger. It’s almost as ancient as old-school CLIs, going back to the early 1980s at least.

A program like GDB is almost never invoked as a one-liner at all; a command is something you type to its internal command prompt, not the shell. As with new-school CLIs like Subversion’s, all commands begin with an action verb, but there are no switches. Each space-separated token after the verb on the command line is passed to the command handler as a positional argument.

Part of Igor’s interface is intended to be a GDB-style interpreter. In that mode, the release command should logically look something like this, with igor’s command prompt at the left margin:

igor> release fooproject 1.2.3 foo-1.2.3.tgz

Note that these are the same arguments in the same order as our old-school “igor -r” command, but now -r has been replaced by a command verb and the order of what follows it is fixed. If we were designing Igor to be Subversion-like, with a fire-and-forget interface and no internal command interpreter at all, it would correspond to a shell command line like this:

$ igor release fooproject 1.2.3 foo-1.2.3.tgz

This is where we get to the collision of design styles I referred to earlier. What was really confusing Ian, I think, is that part of his experience was pulling for old-school fire-and-forget with switches, part of his experience was pulling for new-school as filtered through git’s rather botched version of it, and then there is this internal GDB-like interpreter to reconcile with how the command line works.

My apprentice’s confusion was completely reasonable. There’s a real question here which the tradition he’s immersed in has no canned, best-practices answer for. Git and GDB evade it in equal and opposite ways – Git by not having any internal interpreter like GDB, GDB by not being designed to do anything in a fire-and-forget mode without going through its internal interpreter.

The question is: how do you design a tool that (a) has a GDB-like internal interpreter for a command minilanguage, (b) also allows you to write useful fire-and-forget one-liners in the shell without diving into that interpreter, (c) has syntax for those one-liners that looks like an old-school CLI, and (d) has only one syntax for each command?

And the answer is: you can’t actually satisfy all four of those constraints at once. One of them has to give. It’s trivially true that if you abandon (a) or (b) you evade the problem, the way Git and GDB do. The real problem is that an old-school CLI wants to have terse switch clauses with flexible order, a GDB-style minilanguage wants to have more verbose commands with positional arguments, and never these twain shall meet.

The only one-syntax-for-each-command choice you can make is to have the same command interpreter parse your command line and what the user types to the internal prompt.

I bit this bullet when I designed reposurgeon, which is why a fire-and-forget command to read a stream dump of a Subversion repository and build a live repository from it looks like this:

$ reposurgeon "read <project .svn" "prefer git" "rebuild ../overthere"

Each of those string arguments is just fed to reposurgeon’s internal interpreter; any attempt to look like an old-school CLI has been abandoned. This way, I can fire and forget multiple reposurgeon commands; for Igor, it might be more appropriate to pass all the tokens on the command line as a single command.
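That one-interpreter design can be sketched in a few lines: a single command parser, called both on each shell argument and on each line typed at the internal prompt. This is an illustrative sketch with a made-up “release” verb, not reposurgeon’s or igor’s actual code:

```python
import shlex
import sys

# One parser serves both modes; the verb and messages here are
# illustrative, not the real reposurgeon or igor command set.
def execute(line):
    tokens = shlex.split(line)
    if not tokens:
        return ""
    verb, args = tokens[0], tokens[1:]
    if verb == "release":
        project, tag, tarball = args
        return "released %s as %s %s" % (tarball, project, tag)
    raise ValueError("unknown command: %s" % verb)

def main(argv):
    if argv:
        # Fire-and-forget mode: each shell argument is a complete command.
        return [execute(cmd) for cmd in argv]
    # Otherwise drop into the interactive interpreter.
    return [execute(line) for line in sys.stdin]
```

With this shape, `main(["release fooproject 1.2.3 foo-1.2.3.tgz"])` exercises exactly the same code path as typing `release fooproject 1.2.3 foo-1.2.3.tgz` at the prompt, which is what satisfies the one-syntax-per-command constraint.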

The other possible way Igor could go is to have a command language for the internal interpreter in which each line looks like a new-school shell command with a command verb followed by switch clusters:

igor> release -t 1.2.3 -n fooproject foo-1.2.3.tgz

Which is fine except that now we’ve violated some of the implicit rules of the GDB style. Those aren’t simple positional arguments, and we’re back to the higher cognitive load of having to remember cryptic switches.

But maybe that’s what your prospective users would be comfortable with, because it fits their established habits! This seems to me unlikely but possible.

Design questions like these generally come down to having some sense of who your audience is. Who are they? What do they want? What will surprise them the least? What will fit into their existing workflows and tools best?

I could launch into a panegyric on the agile-programming practice of design-by-user-story at this point; I think this is one of the things agile most clearly gets right. Instead, I’ll leave the reader with a recommendation to read up on that idea and learn how to do it right. Your users will be grateful.

Two graceful finishes

Post Syndicated from esr original http://esr.ibiblio.org/?p=8694

I’m having a rather odd feeling.

Reposurgeon. It’s…done; it’s a finished tool, fully fit for its intended purposes. After nine years of work and thinking, there’s nothing serious left on the to-do list. Nothing to do until someone files a bug or something in its environment changes, like someone writing an exporter/importer pair it doesn’t know about and should.

When you wrestle with a problem that is difficult and worthy for long enough, the problem becomes part of you. Having that go away is actually a bit disconcerting, like putting your foot on a step that’s not there. But it’s OK; there are lots of other interesting problems out there and I’m sure one will find me to fill reposurgeon’s place in my life.

I might try to write a synoptic look back on the project at some point.

Looking over some back blog posts on reposurgeon, I became aware that I never told my blog audience the last bit of the saga following my ankle surgery. That’s because there was no drama. The ankle is now fully healed and as solidly functional as though I never injured it at all – I’ve even stopped having residual aches in damp weather.

Evidently the internal cartilage healed up completely, which is far from a given with this sort of injury. My thanks to everyone who was supportive when I literally couldn’t walk.

Term of the day: builder gloves

Post Syndicated from esr original http://esr.ibiblio.org/?p=8688

Another in my continuing series of attempts to coin, or popularize, terms that software engineers don’t know they need yet. This one comes from my apprentice, Ian Bruene.

“Builder gloves” is the special knowledge possessed by the builder of a tool which allows the builder to use it without getting fingers burned.

Software that requires builder gloves to use is almost always faulty. There are rare exceptions to this rule, when the application area of the software is so arcane that the builder’s specialist knowledge is essential to driving it. But usually the way to bet is that if your code requires builder gloves it is half-baked, buggy, has a poorly designed UI or is poorly documented.

When you ship software that you know requires builder gloves, or someone else tells you that it seems to require builder gloves, it could ruin someone else’s day and reflect badly on you. But if you believe in releasing early and often, sometimes half-baked is going to happen. Here’s how to mitigate the problem.

1. Warn the users what’s buggy and unstable in your release notes and the rest of your documentation.

2. Document your assumptions where the user can see them.

3. Work harder at not being a terrible UI designer.

Becoming really good at software engineering requires that you care about the experience the user sees, not just the code you can see.

This is your final warning

Post Syndicated from esr original http://esr.ibiblio.org/?p=8685

Earlier today, armed demonstrators stormed the Michigan State House protesting the state’s stay-at-home order.

I’m not going to delve into the specific politics around the stay-at-home order, or whether I think it’s a good idea or a bad one, because there is a more important point to be made here. Actually, two important points.

(1) Nobody got shot. These protesters were not out-of-control yahoos intent on violence. This was a carefully calibrated and very controlled demonstration.

(2) This is the American constitutional system working correctly and as designed by the Founders. When the patience of the people has been pushed past its limit by tyranny and usurpation, armed revolt is what is supposed to happen. The threat of popular armed revolt is an intentional and central part of our system of checks and balances.

We aren’t at that point yet, though. The Michigan legislators should consider that they have received a final warning before actual shooting. The protesters demonstrated and threatened just as George Washington, Thomas Jefferson, Patrick Henry, and other Founders expected and wanted citizens to demonstrate and threaten in like circumstances.

I am sure there will be calls from the usual suspects to tighten gun laws and arrest the protesters as domestic terrorists. All of which will miss the point. Nobody got shot – this was the last attempt, within the norms of the Constitutional system as designed, to avoid violence.

If the Michigan state government responds to this demonstration with repression or violence, citizens will have the right – indeed, they will have a Constitutional duty – to correct the arrogance of power via armed revolt.

This was your final warning, legislators. Choose wisely.

Lassie errors

Post Syndicated from esr original http://esr.ibiblio.org/?p=8674

I didn’t invent this term, but boosting the signal gives me a good excuse for a rant against its referent.

Lassie was a fictional dog. In all her literary, film, and TV adaptations the most recurring plot device was some character getting in trouble (in the print original, two brothers lost in a snowstorm; in popular false memory “Little Timmy fell in a well”, though this never actually happened in the movies or TV series) and Lassie running home to bark at other humans to get them to follow her to the rescue.

In software, “Lassie error” is a diagnostic message that barks “error” while being comprehensively unhelpful about what is actually going on. The term seems to have first surfaced on Twitter in early 2020; there is evidence in the thread of at least two independent inventions, and I would be unsurprised to learn of others.

In the Unix world, a particularly notorious Lassie error is what the ancient line-oriented Unix editor “ed” does on a command error. It says “?” and waits for another command – which is especially confusing since ed doesn’t have a command prompt. Ken Thompson had an almost unique excuse for extreme terseness, as ed was written in 1973 to run on a computer orders of magnitude less capable than the embedded processor in your keyboard.

Herewith the burden of my rant: You are not Ken Thompson, 1973 is a long time gone, and all the cost gradients around error reporting have changed. If you ever hear this term used about one of your error messages, you have screwed up. You should immediately apologize to the person who used it and correct your mistake.

Part of your responsibility as a software engineer, if you take your craft seriously, is to minimize the costs that your own mistakes or failures to anticipate exceptional conditions inflict on others. Users have enough friction costs when software works perfectly; when it fails, you are piling insult on that injury if your Lassie error leaves them without a clue about how to recover.

Really this term is unfair to Lassie, who as a dog didn’t have much of a vocabulary with which to convey nuances. You, as a human, have no such excuse. Every error message you write should contain a description of what went wrong in plain language, and – when error recovery is possible – contain actionable advice about how to recover.

This remains true when you are dealing with user errors. How you deal with (say) a user mistake in configuration-file syntax is part of the user interface of your program just as surely as the normally visible controls are. It is no less important to get that communication right; in fact, it may be more important – because a user encountering an error is a user in trouble that he needs help to get out of. When Little Timmy falls down a well you constructed and put in his path, your responsibility to say something helpful doesn’t lessen just because Timmy made the immediate mistake.

A design pattern I’ve seen used successfully is for immediate error messages to include both a one-line summary of the error and a cookie (like “E2317”) which can be used to look up a longer description including known causes of the problem and remedies. In a hypothetical example, the pair might look like this:

Out of memory during stream parsing (E1723)

E1723: Program ran out of memory while building the deserialized internal representation of a stream dump. Try lowering the value of GOGC to cause more frequent garbage collections, increasing the size of your swap partition, or moving to hardware with more RAM.

The key point here is that the user is not left in the lurch. The messages are not a meaningless bark-bark, but the beginning of a diagnosis and repair sequence.
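A minimal sketch of the pattern: keep a catalog keyed by cookie, emit the one-line summary at the point of failure, and let a help command (or the documentation) expand the cookie on demand. The catalog entry here is the hypothetical E1723 from the example above:

```python
# Error catalog keyed by cookie; E1723 is the hypothetical example above,
# not a real error code from any particular program.
ERROR_CATALOG = {
    "E1723": ("Out of memory during stream parsing",
              "Program ran out of memory while building the deserialized "
              "internal representation of a stream dump. Try lowering the "
              "value of GOGC, increasing the size of your swap partition, "
              "or moving to hardware with more RAM."),
}

def report(cookie):
    """One-line message shown at the point of failure."""
    summary, _ = ERROR_CATALOG[cookie]
    return "%s (%s)" % (summary, cookie)

def explain(cookie):
    """Longer diagnosis, looked up when the user asks about the cookie."""
    summary, detail = ERROR_CATALOG[cookie]
    return "%s: %s" % (cookie, detail)
```

The immediate message stays terse enough not to flood the terminal, while the cookie guarantees there is always a path to the full diagnosis.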

If the thought of improving user experience in general leaves you unmoved, consider that the pain you prevent with an informative error message is rather likely to be your own, as you use your software months or years down the road or are required to answer pesky questions about it.

As with good comments in your code, it is perhaps most motivating to think of informative error messages as a form of anticipatory mercy towards your future self.

Payload, singleton, and stride lengths

Post Syndicated from esr original http://esr.ibiblio.org/?p=8663

Once again I’m inventing terms for useful distinctions that programmers need to make and sometimes get confused about because they lack precise language.

The motivation today is some issues that came up while I was trying to refactor some data representations to reduce reposurgeon’s working set. I realized that there are no fewer than three different things we can mean by the “length” of a structure in a language like C, Go, or Rust – and no terms to distinguish these senses.

Before reading these definitions, you might want to do a quick read through The Lost Art of Structure Packing.

The first definition is payload length. That is the sum of the lengths of all the data fields in the structure.

The second is stride length. This is the length of the structure with any interior padding and with the trailing padding or dead space required when you have an array of them. This padding is forced by the fact that on most hardware, an instance of a structure normally needs to have the alignment of its widest member for fastest access. If you’re working in C, sizeof gives you back a stride length in bytes.

I derived the term “stride length” for individual structures from a well-established traditional use of “stride” for array programming in PL/1 and FORTRAN that is decades old.

Stride length and payload length coincide if the structure has no interior or trailing padding. This can sometimes happen when you get an arrangement of fields exactly right, or your compiler might have a pragma to force tight packing even though fields may have to be accessed by slower multi-instruction sequences.

“Singleton length” is the term you’re least likely to need. It’s the length of a structure with interior padding but without trailing padding. The reason I’m dubbing it “singleton” length is that it might be relevant in situations where you’re declaring a single instance of a struct not in an array.

Consider the following declarations in C on a 64-bit machine:

struct {int64_t a; int32_t b;} x;
char y;

That structure has a payload length of 12 bytes. Instances of it in an array would normally have a stride length of 16 bytes, with the last four bytes being padding. But in this situation, with a single instance, your compiler might well place the storage for y in the byte immediately following x.b, where there would be trailing padding in an array element.

This struct has a singleton length of 12, same as its payload length. But these are not necessarily identical. Consider this:

struct {int64_t a; char b[6]; int32_t c;} x;

The way this is normally laid out in memory, it will have two bytes of interior padding after b, then 4 bytes of trailing padding after c. Its payload length is 8 + 6 + 4 = 18; its stride length is 8 + 8 + 8 = 24; and its singleton length is 8 + 6 + 2 + 4 = 20.
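You can check these numbers without leaving Python; ctypes lays structures out according to the platform C ABI. This sketch assumes a typical 64-bit machine where int64_t has 8-byte alignment:

```python
import ctypes

class First(ctypes.Structure):   # struct {int64_t a; int32_t b;} from above
    _fields_ = [("a", ctypes.c_int64), ("b", ctypes.c_int32)]

class Second(ctypes.Structure):  # struct {int64_t a; char b[6]; int32_t c;}
    _fields_ = [("a", ctypes.c_int64),
                ("b", ctypes.c_char * 6),
                ("c", ctypes.c_int32)]

first_stride = ctypes.sizeof(First)    # 16: payload 12 plus 4 trailing bytes
second_offset_c = Second.c.offset      # 16: two interior padding bytes after b
second_singleton = second_offset_c + ctypes.sizeof(ctypes.c_int32)  # 20
second_stride = ctypes.sizeof(Second)  # 24: rounded up to alignment 8
```

sizeof here is the stride length, just as in C; singleton length has to be reconstructed from the last member’s offset, since no language primitive reports it directly.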

To avoid confusion, you should develop a habit: any time someone speaks or writes about the “length” of a structure, stop and ask: is this payload length, stride length, or singleton length?

Most usually the answer will be stride length. But someday, most likely when you’re working close to the metal on some low-power embedded system, it might be payload or singleton length – and the difference might actually matter.

Even when it doesn’t matter, having a more exact mental model is good for reducing the frequency of times you have to stop and check yourself because a detail is vague. The map is not the territory, but with a better map you’ll get lost less often.

Insights need you to keep your nerve

Post Syndicated from esr original http://esr.ibiblio.org/?p=8657

This is a story I’ve occasionally told various friends when one of the subjects it touches comes up. I told it again last night, and it occurred to me that I ought to put in the blog. It’s about how, if you want to have productive insights, you need a certain kind of nerve or self-belief.

Many years ago – possibly as far back as the late 80s – I happened across a film of a roomful of Sufi dervishes performing a mystical/devotional exercise called “dhikr”. The film was very old, grainy B&W footage from the early 20th century. It showed a roomful of bearded, turbaned, be-robed men swaying, spinning, and chanting. Some were gazing at bright objects that might have been lamps, or polished metal or jewelry reflecting other lamps – it wasn’t easy to tell from the footage.

I can’t find the footage I saw, but the flavor was a bit like this. No unison movement in what I saw, though – individuals doing different things and ignoring each other, more inward-focused.

The text accompanying the film explained that the intention of “dhikr” is to shut out the imperfect sensory world so the dervish can focus on the pure and holy name of Allah. “Right,” I thought, already having had quite a bit of experience as an experimental mystic myself, “I get this. In Zen language, they’re shutting down the drunken monkeys. Autohypnosis inducing a serene mind, nothing surprising here.”

But there was something else. Something about the induction methods they were using. It all seemed oddly familiar, more than it ought to. I had seen behaviors like this before somewhere, from people who weren’t wearing pre-Kemalist Turkish garb. I watched the film…and it hit me. This was exactly like watching a roomful of people with serious autism!

The rocking. The droning. The fixated behavior, or in the Sufi case the behavior designed to induce fixation. Which immediately led to the next question: why? I think the least hypothesis in cases where you observe parallel behaviors is that they have parallel causation. We know what the Sufis tell us about what they're doing; might that tell us why the autists are doing what they're doing?

The Sufis are trying to shut out sense data. What if the autists are too? That would imply that the autists live in a state of what is, for them, perpetual sensory overload. Their dhikr-like behaviors are a coping mechanism, an attempt to turn down the gain on their sensors so they can have some peace inside their own skulls.

The first applications of nerve I want to talk about here are (a) the nerve to believe that autistic behaviors have an explanation more interesting than “uhhh…those people are randomly broken”, and (b) the nerve to believe that you can apply a heuristic like “parallel behavior, parallel causes” to humans when you picked it up from animal ethology.

Insights need creativity and mental flexibility, but they also need you to keep your nerve. I think there are some very common forms of failing to keep your nerve that people who would like to have good and novel ideas self-sabotage with. One is “If that were true, somebody would have noticed it years ago”. Another is “Only certified specialists in X are likely to have good novel ideas about X, and I’m not a specialist in X, so it’s a bad risk to try following through.”

You, dear reader, are almost certainly browsing this blog because I’m pretty good at not falling victim to those, and duly became famous by having a few good ideas that I didn’t drop on the floor. However, in this case, I failed to keep my nerve in another bog-standard way: I believed an expert who said my idea was silly.

That was decades ago. Nowadays, the idea that autists have a sensory-overload problem is not even controversial – in fact it’s well integrated into therapeutic recommendations. I don’t know when that changed, because I haven’t followed autism research closely enough. Might even be the case that somewhere in the research literature, someone other than me has noticed the similarity between semi-compulsive autistic behaviors and Sufi dhikr, or other similar autohypnotic practices associated with mystical schools.

But I got there before the experts did. And dropped the idea because my nerve failed.

Now, it can be argued that there were good reasons for me not to have pursued it. Getting a real hearing for a heterodox idea is difficult in fields where the experts all have their own theories they’re heavily invested in, and success is unlikely enough that perhaps it wasn’t an efficient use of my time to try. That’s a sad reason, but in principle a sound one.

But losing my nerve because an expert laughed at me, that was not sound. I think I wouldn’t make that mistake today; I’m tougher and more confident than I used to be, in part because I’ve had “crazy” ideas that I’ve lived to see become everyone’s conventional wisdom.

You can read this as a variation on a theme I developed in Eric and the Quantum Experts: A Cautionary Tale. But it bears repeating. If you want to be successfully creative, your insights need you to keep your nerve.

2020-04-04 trial online ИББ

Post Syndicated from Vasil Kolev original https://vasil.ludost.net/blog/?p=3440

Some time ago "Кривото", which was on the corner of "Дондуков" and "Будапеща", closed down, and with it the ИББ disappeared too. I hadn't managed to get there for a long time, and I was planning to go exactly the week they declared the pandemic…

So, there is talk of reviving it, and since we can't do it in person, tonight we held a gathering at jitsi.ludost.net/ibb, experimentally, to see how it goes. There were about ten of us; we talked for an hour or two, had a drink, had a bite, and went off to sleep 🙂 So we'll probably hold another gathering this Wednesday, for anyone who misses it. It doesn't look feasible for all of us to turn our cameras on without our machines catching fire, but audio-only is fine too.

I definitely count this in the category of things that help keep us from going crazy 🙂

PSA: COVID-19 is a bad reason to get a firearm

Post Syndicated from esr original http://esr.ibiblio.org/?p=8627

I’m a long-time advocate of more ordinary citizens getting themselves firearms and learning to use them safely and competently. But this is a public-service announcement: if you’re thinking of running out to buy a gun because of COVID-19, please don’t.

There are disaster scenarios in which getting armed up in a hurry makes sense; the precondition for all of them is a collapse of civil order. That’s not going to happen with COVID-19 – the mortality rate is too low.

Be aware that the gun culture doesn't like and doesn't trust panic buyers; they tend to be annoying flake cases who are more of a liability than an asset. We prefer a higher-quality intake than we can get in the middle of a plague panic. Slow down. Think. And if you've somehow formed the idea that you're in a zombie movie or a Road Warrior sequel, chill. That's not a useful reaction; it can lead to panic shootings and those are never good.

I don’t mean to discourage anyone from buying guns in the general case – more armed citizens are a good thing on multiple levels. After we’re through the worst of this would be a good time for it. But do it calmly, learn the Four Rules of Firearms Safety first, and train, train, train. Get good with your weapons, and confident enough not to shoot unless you have to, before the next episode of shit-hits-the-fan.

2020-03-21 threat models

Post Syndicated from Vasil Kolev original https://vasil.ludost.net/blog/?p=3439

A big problem is the expectation that everyone knows what you know. The following was prompted by a conversation a little while ago.

At the moment everyone is urgently looking for ways to move their work home. I'm one of the lucky ones – in our company everything can in principle be done remotely, and we've set things up that way (a consequence of the large number of sysadmins per capita at the company – I think we're over 50%). Also, because of the nature of the work, we've thought seriously about how to make things secure, and the first step in that is deciding what the threat model is (i.e. "what/whom are we defending against?"), since that determines whether the measures taken make sense.

The easiest question is "whom are we defending against": assess their capabilities and decide what measures to take. The basic options are "script kiddies", "opportunists", "the competition", "the state", and "the NSA".

"Script kiddies" are the easiest, covered by the standard measures everyone takes – from having no trivial passwords and no unpatched services, to people knowing not to open dangerous documents (or, if receiving documents is their job, having an environment in which to view them safely, which is the case for journalists, for example).

"Opportunists" are people in the right place at the right moment. An example would be datacenter employees who know they're about to be fired and walk off with a disk from some random server. Or thieves who have gotten hold of some of your gear, powered it down and packed it up, and you don't want the information on it falling into the wrong hands or being stolen. Or somewhat more targeted scammers, and whatnot. Here you need to think a bit more seriously about your weak points and take measures (which may be "fill the machine with concrete, bolt it to the floor and weld it shut") that make sense for the situation.

"The competition" is very close to the above, only more targeted. There are other avenues here too, e.g. legal ones (I'm not familiar with the Bulgarian legislation on the subject, but whoever cares about it will have to look into it). The resources here may also be greater, i.e. your competition may pay someone from the previous category for something more interesting.

"The state" is really interesting, because this is where people's utopian thinking shows. Encrypted files, secret channels and so on – much of that protects against the previous categories, not this one. You don't defend against the rubber-hose method (i.e. being beaten until you talk) with technical means, but much more with operational ones – for example, making sure that no one in the affected jurisdiction has access to the things being protected. The current situation with the quarantine and the state of emergency shows this very well: if we assume we're on the territory of an adversary with several orders of magnitude more resources than us, then apart from people trained specifically for the purpose (i.e. spies), who have a serious support system behind them and who still get caught at a rate above 10% (because operational security is a very hard thing), there's no one else for whom contemplating such measures makes sense.
Or in simpler terms: if everything you have is in a country under a state of emergency, then however incompetent the authorities may be in some respects, they are certainly good at searching and beating, and any measures must take that into account – for example, by trying not to have the state as our adversary…

"NSA" is more of a collective term for what's called an APT (Advanced Persistent Threat, i.e. people with a lot of brains, motivation and resources), and is an example of when you need air-gap firewalls and other such extreme measures, like Faraday cages, procedures that require a minimum of two people, and so on. If you work on payment systems and the like, this is roughly within your threat model; otherwise it's very unlikely, and very expensive, to defend against them.

So, if you're planning how your organization will work from home, figure out what you've been defending against so far, achieve the same thing (which in most cases means some VPNs, a little training, and a little hardware like microphones), and only then decide whether that's actually adequate and improve it. But it's important not to take measures just for the sake of taking measures.

2020-03-20 first week

Post Syndicated from Vasil Kolev original https://vasil.ludost.net/blog/?p=3438

An interesting week.

A decent part of it went into tuning people's audio – first for the instructors in the operating systems course at FMI, then for everyone at the company – plus catching up on work, cooking, more work, and all sorts of "what can we do in this situation that would be meaningful".

At the moment the operating systems course uses, for part of its needs, a combination of live streaming (with about 10 seconds of latency; we may try to bring that down) and IRC, and for some other things Jitsi Meet (I brought up an instance at jitsi.ludost.net – anyone who wants to can use it). Overall we try to use local resources, both for latency and so as not to accidentally drown the external pipes.
(The bit about the pipes is probably a bit exaggerated; Bulgaria has quite good Internet, and the fact that Netflix limited streaming to SD during the pandemic speaks more to a lack of capacity on their end.)
(But it is quite important not to depend on too many external resources.)

The Jitsi and the IRC are also quite useful, from what I can see, and to keep from going crazy people are having their morning coffee over video conference. Others are organizing to assemble emergency ventilators (there's a Facebook group on the subject somewhere, which includes a few medical professionals); some are 3D-printing components for protective shields for intubation and the like (another link to files for such a thing) (I even think the printer at initLab has already been harnessed for this).

I also noticed how, as one of the grandmothers was leaving the building (sometime early in the week), one of the neighbors offered to fetch whatever she needed from outside so she wouldn't expose herself to risk. In short, even though there are some idiots, the times seem to be bringing out the good in most people after all.

So we survived one week. At least 7 more to go. Personally, I'll count it as a major achievement if it doesn't come to the army being brought out into the streets (which could happen even without a compelling reason, simply because someone tries to accumulate political capital) or everyone going crazy.

Cheers and good night.

2020-03-13 streaming in a time of plague

Post Syndicated from Vasil Kolev original https://vasil.ludost.net/blog/?p=3437

We live in interesting times. They even come from China.

Anyway. Over the last few days I've been having various conversations about how different teaching activities can continue online. We've almost put together something for the operating systems lab sessions at FMI, and I'm talking with a friend who is thinking of building something like that for his school.

I'll write something more detailed about how and what is possible, but since I haven't seen teaching in the last 10 years – and school teaching in 20 – I'd be very glad if someone told me what's needed: how much feedback there should be and whether the learners should be visible, what things need to be shown (i.e. something to play the role of the whiteboard), whether the instructor themselves needs to be visible or whether their board/screen is enough, and things like that.

The right to be rude

Post Syndicated from esr original http://esr.ibiblio.org/?p=8609

The historian Robert Conquest once wrote: “The behavior of any bureaucratic organization can best be understood by assuming that it is controlled by a secret cabal of its enemies.”

Today I learned that the Open Source Initiative has reached that point of bureaucratization. I was kicked off their lists for being too rhetorically forceful in opposing certain recent attempts to subvert OSD clauses 5 and 6. This despite the fact that I had vocal support from multiple list members who thanked me for being willing to speak out.

It shouldn’t be news to anyone that there is an effort afoot to change – I would say corrupt – the fundamental premises of the open-source culture. Instead of meritocracy and “show me the code”, we are now urged to behave so that no-one will ever feel uncomfortable.

The effect – the intended effect, I should say – is to diminish the prestige and autonomy of people who do the work – write the code – in favor of self-appointed tone-policers. In the process, the freedom to speak necessary truths, even when the manner in which they are expressed is unpleasant, is being gradually strangled.

And that is bad for us. Very bad. Both directly – it damages our self-correction process – and in its second-order effects. The habit of institutional tone policing, even when well-intentioned, too easily slides into the active censorship of disfavored views.

The cost of a culture in which avoiding offense trumps the liberty to speak is that crybullies control the discourse. To our great shame, people who should know better – such as the OSI list moderators and BOD – have internalized anticipatory surrender to crybullying. They no longer even wait for the soi-disant victims to complain before wielding the ban-hammer.

We are being social-hacked from being a culture in which freedom is the highest value to one in which it is trumped by the suppression of wrongthink and wrongspeak. Our enemies – people like Coraline Ada Ehmke – do not even really bother to hide this objective.

Our culture is not fatally damaged yet, but the trend is not good. OSI has been suborned and is betraying its founding commitment to freedom. “Codes of Conduct” that purport to regulate even off-project speech have become all too common.

Wake up and speak out. Embrace the right to be rude – not because “rude” in itself is a good thing, but because the degenerative slide into suppression of disfavored opinions has to be stopped right where it starts, at the tone policing.

2020-02-20 “Лично досие”

Post Syndicated from Vasil Kolev original https://vasil.ludost.net/blog/?p=3436

(I also owe a post about FOSDEM, but I'll get to it)

Of the two really important books that came out last year – Edward Snowden's autobiography and Mindf*ck, about the Cambridge Analytica story – the first is being published in Bulgarian these days (by Егмонт, which surprised me a lot). If you haven't read it in the original and/or have friends who can't read it in the original, it's definitely worth it.

(My connection to this is that I'm the technical editor of the Bulgarian translation, so send any complaints on that score to me.)

The combination of the two books shows something very important: how easily information about everyone in the world can be effectively mass-collected, and how it can be used against them. Replace "them" with "you" to get a better sense of what that means in practice.

Chinese bioweapon II: Electric Boogaloo

Post Syndicated from esr original http://esr.ibiblio.org/?p=8587

Yikes. Despite the withdrawal of the Indian paper arguing that the Wuhan virus showed signs of engineering, the hypothesis that it's an escaped bioweapon looks stronger than ever.

Why do I say this? Because it looks like my previous inclination to believe the rough correctness of the official statistics – as conveyed by the Johns Hopkins tracker – was wrong. I now think the Chinese are in way deeper shit than they’re admitting.

My willingness to believe the official line didn’t stem from any credulity about what the Chinese government would do if it believed the truth wouldn’t serve. As Communists they are lying evil scum pretty much by definition, and denial would have been politically attractive for as long as they thought they could nip the pandemic in the bud. I thought their incentives had flipped and they would now be honest as a way of assisting their own countermeasures and seeking international help.

My first clue that I was wrong about that came from a friend who is plugged into the diaspora Chinese community. According to him, there is terrifying video being sent from Chinese clans to the overseas branches they planted in the West to prepare a soft landing in case they have to bail out of China. Video of streets littered with corpses. And of living victims exhibiting symptoms like St. Vitus's Dance (aka Sydenham's chorea), which means the virus is attacking central nervous systems.

My second clue was the Tencent leak. Read about it here; the takeaway is that there is now reason to believe that as of Feb 1st the actual coronavirus toll looked like this: confirmed cases 154023, suspected cases 79808, cured 269, deaths 24589.

Compare that with the Johns Hopkins tracker numbers for today, a week later: Confirmed cases 31207, cured 1733, deaths 638. Allowing for the Tencent leak being roughly one doubling period earlier, the official statistics have been lowballing the confirmed case number by a factor of about 8 and the deaths by a factor of about 80. And then inflating cures by a factor of about 12.

Even given what I'd heard about the video, I might have remained skeptical about the leak numbers if someone (I don't remember who or where) hadn't pointed out that the ratio between reported cases and deaths has been suspiciously constant in the official Chinese statistics. In uncooked statistics one would expect more noise in that ratio, if only because of reporting problems.

So my present judgment, subject to change on further evidence, is that the Tencent-leak numbers are the PRC’s actual statistics. And that has a lot of grim implications.

One is that the Wuhan virus has at least a 15% fatality rate in confirmed cases – and most ways the PRC's own statistics could be off due to reporting problems would drive it higher. Another is that containment in China has failed. Even in the cooked official statistics the first derivative has not fallen; the doubling time is on the close order of five days now and may decrease.

We are already well past any even theoretical coping capability of China’s medical infrastructure. For that matter, it isn’t likely that there are enough trained medical personnel on the entire planet to get on top of a pandemic this size with a 5-day doubling time.

Which means this thing is probably not going to top out in China until it saturates the percentage of the population without natural immunity and kills at least 15% of them. The big, grim question is how many natural immunes there are. The history of past natural pandemics does not conduce to any optimism at all about that.

A very safe prediction is that a whole lot of elderly Chinese people are going to die because their immune systems are pre-compromised.

China’s population is about 1.4 billion. Conservatively, therefore, we can already expect this plague to kill more people in China than the Black Death did in Europe. At its present velocity we can expect that in about 12 doubling periods, or approximately 60 days.

Meanwhile, coronavirus spread outside China enters a critical time.

Based on what we think we know about the incubation period (about 14 days), if there’s going to be a pandemic breakout outside of China due to asymptomatic carriers, we should start to see a slope change in the overseas incidence curve during the next week. It’s been long enough for that now.

If that doesn’t happen, either the rest of the world dodged the pandemic bullet (optimistic) or the low end of the incubation period is longer than has been thought (pessimistic). On the basis of previous experience with SARS and MERS, I think the optimistic read is more likely to be correct.

Now back to the bioweapon hypothesis. Does recent data strengthen or weaken it? Consider:

* 645 Indian evacuees from Wuhan all tested negative.

* The only death outside China has been an ethnic-Chinese traveler from Wuhan.

The evidence that this virus likes to eat Han Chinese and almost ignores everybody else is mounting. That’s bioweapon-like selectivity.

One of my previous objections to the bioweapon hypothesis was that the Wuhan virus’s lethality wasn’t high enough. At 15% or higher I withdraw that objection.

And that St. Vitus’s Dance thing – coronaviruses don’t do that. But it’s exactly the kind of thing you’d engineer into a terror weapon intended not just to kill a chunk of your target population but break the morale of the rest.

Finally, my friend Phil Salkie tells me that on Google Maps the reported location of the Wuhan Institute of Virology has been jumping around like a Mexican flea. That’s guilty behavior, that is.

Head-voice vs. quiet-mind

Post Syndicated from esr original http://esr.ibiblio.org/?p=8558

I’m utterly boggled. Yesterday, out of nowhere, I learned of a fundamental divide in how peoples’ mental lives work about which I had had no previous idea at all.

From this: Today I Learned That Not Everyone Has An Internal Monologue And It Has Ruined My Day.

My reaction to that title can be rendered in language as – “Wait. People actually have internal monologues? Those aren’t just a cheesy artistic convention used to concretize the welter of pre-verbal feelings and images and maps bubbling in peoples’ brains?”

Apparently not. I'm what I have now learned to call a quiet-mind. I don't have an internal narrator constantly expressing my thinking in language, not even in shorthand. I'm not a head-voice person. So much not so that when I follow the usual convention of rendering quotes from my thinking as though they were spoken to myself, I always feel somewhat as though I'm lying, fabulating to my readers. It's not like that at all! I justify doing that only because the full multiordinality of my actual thought-forms won't fit through my fingers.

But, apparently, for others it often is like that. Yesterday I learned that the world is full of head-voice people who report that they don’t know what they’re thinking until the narratizer says it. Judging by the reaction to the article it seems us quiet-minds are a minority, one in five or fewer. And that completely messes with my head.

What's the point? Why do you head-voice people need a narrator to tell you what your own mind is doing? I fully realize this question could be reflected with "Why don't you need one, Eric?" but it is quite disturbing in either direction.

So now I'm going to report some interesting detail. There are exactly two circumstances under which I have head-voice. One is when I'm writing or consciously framing spoken communication. Then, my compositional output does indeed manifest as narratizing head-voice. The other circumstance is the kind of hypnogogic experience I reported in Sometimes I hear voices.

Outside of those two circumstances, no head-voice. Instead, my thought forms are a jumble of words, images, and things like diagrams (a commenter on Instapundit spoke of “concept maps” and yeah, a lot of it is like that). To speak or write I have to down-sample this flood of pre-verbal stuff into language, a process I am not normally aware of except as an occasional vague and uneasy sense of how much I have thrown away.

(A friend reports Richard Feynman observing that "You don't describe the shape of a camshaft to yourself." No; you visualize a camshaft, then work with that visualization in your head. Well, if you can – some people can't. I therefore dub the pre-verbal level "camshaft thinking".)

To be fully aware of that pre-verbal, camshaft-thinking level I have to go into a meditative or hypnogogic state. Then I can observe that underneath my normal mental life is a vast roar of constant free associations, apparently random memory retrievals, and weird spurts of logic connecting things, only some of which passes filters to present to my conscious attention.

I don’t think much or any of this roar is language. What it probably is, is the shock-front described in the predictive-processing model of how the brain works – where the constant inrush of sense-data meets the brain’s attempt to fit it to prior predictive models.

So for me there are actually three levels: (1) the roaring flood of free association, which I normally don’t observe; (2) the filtered pre-verbal stream of consciousness, mostly camshaft thinking, that is my normal experience of self, and (3) narratized head-voice when I’m writing or thinking about what to say to other people.

I certainly do not head-voice when I program. No, that's all camshaft thinking – concept maps of data structures, chains of logic, processing that is like mathematical reasoning though not identical to it. After the fact I can sometimes describe parts of this process in language, but it doesn't happen in language.

Learning that other people mostly hang out at (3), with a constant internal monologue…this is to me unutterably bizarre. A day later I'm still having trouble actually believing it. But I've been talking with my wife and friends, and the evidence is overwhelming that it's true.

Language…it’s so small. And linear. Of course camshaft thinking is intrinsically limited by the capabilities of the brain and senses, but less so. So why do most people further limit themselves by being in head-voice thinking most of the time? What’s the advantage to this? Why are quiet-minds a minority?

I think the answers to these questions might be really important.

Missing documentation and the reproduction problem

Post Syndicated from esr original http://esr.ibiblio.org/?p=8551

I recently took some criticism over the fact that reposurgeon has no documentation that is an easy introduction for beginners.

After contemplating the undeniable truth of this criticism for a while, I realized that I might have something useful to say about the process and problems of documentation in general – something I didn’t already bring out in
How to write narrative documentation. If you haven’t read that yet, doing so before you read the rest of this mini-essay would be a good idea.

“Why doesn’t reposurgeon have easy introductory documentation” would normally have a simple answer: because the author, like all too many programmers, hates writing documentation, has never gotten very good at it, and will evade frantically when under pressure to try. But in my case none of that description is even slightly true. Like Donald Knuth, I consider writing good documentation an integral and enjoyable part of the art of software engineering. If you don’t learn to do it well you are short-changing not just your users but yourself.

So, with all that said, "Why doesn't reposurgeon have easy introductory documentation" actually becomes a much more interesting question. I knew there was some good reason I'd never tried to write any, but until I read Elijah Newren's critique I never bothered to analyze for the reason. He incidentally said something very useful by mentioning gdb (the GNU symbolic debugger), and that started me thinking, and now I think I understand something general.

If you go looking for gdb intro documentation, you'll find it's also pretty terrible. Examples of a few basic commands are all it can offer; you never get an entire worked example of using gdb to identify and fix a failure point. And why is this?

The gdb maintainers probably aren't very self-aware about this, but I think at bottom it's because the attempt would be futile. Yes, you could include a session capture of someone diagnosing and debugging a simple problem with gdb, but the reader couldn't reliably reproduce it. How would the user go about generating a binary on which replicating the same commands produced the same results?

For an example at the opposite extreme, consider the documentation for an image editor such as GIMP. It can have excellent documentation precisely because including worked examples that the reader can easily understand and reproduce is almost trivial to arrange.

What’s my implicit premise here? This: High-quality introductory software documentation depends on worked examples that are understandable and reproducible. If your software’s problem domain features serious technical barriers to mounting and stuffing a gallery of reproducible examples, you have a problem that even great willingness and excellent writing skills can’t fix.

Of course my punchline is that reposurgeon has this problem, and arguably an even worse example of it than gdb’s. How would you make a worked example of a repository conversion that is both nontrivial and reproducible? What would that even look like?

In the gdb documentation, you could in theory write a buggy variant of “Hello, World!” with a crash due to null pointer dereference and walk the reader through locating it with gdb. It would be a ritual gesture in the right direction, but essentially useless because the example is too trivial. It would read as a pointless tease.

Similarly, the reposurgeon documentation could include a worked conversion example on a tiny synthetic repository and be no better off than before. In both problem domains reproducibility implies triviality!

Having identified the deep problem, I’d love to be able to say something revelatory and upbeat about how to solve it.

The obvious inversion would be something like this: to improve the quality of your introductory documentation, design your software so that user reproduction of instructive examples is as easy as its problem domain allows.

I understand that this only pushes the boundaries of the problem. It doesn’t tell you what to do when you’re in a problem domain as intrinsically hostile to reproduction of examples as gdb and reposurgeon are.

Unfortunately, at this point I am out of answers. Perhaps the regulars on my blog will come up with some interesting angle.