
s/bash/zsh/g

Post Syndicated from arp242.net original https://www.arp242.net/why-zsh.html

You would expect this to work, no?

bash% echo $(( .1 + .2 ))
bash: .1 + .2 : syntax error: operand expected (error token is ".1 + .2 ")

Well, bash says no, but zsh just works:

zsh% echo $(( .1 + .2 ))
0.30000000000000004      # Well, "works" insofar IEEE-754 works.

There is simply no way you can do calculations with fractions in bash without
relying on bc, dc, or some hacks. Compared to simply being able to use a + b
it’s ugly, slow, and difficult.
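To make the comparison concrete, here is the kind of workaround the previous paragraph alludes to: delegating the arithmetic to an external tool (awk here, one of those “hacks”):

```shell
# Floating-point maths in bash means shelling out to something else:
awk 'BEGIN { printf "%.1f\n", 0.1 + 0.2 }'   # 0.3
# or with bc:  echo '0.1 + 0.2' | bc -l
```

Workable, but a far cry from just writing `$(( .1 + .2 ))`.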

There are other pretty frustrating omissions in bash; NUL bytes is another fun
one:

zsh% x=$(printf 'N\x00L'); printf $x | xxd -g1 -c3
00000000: 4e 00 4c  N.L

bash% x=$(printf 'N\x00L'); printf $x | xxd -g1 -c3
bash: warning: command substitution: ignored null byte in input
00000000: 4e 4c     NL

It looks like bash added a warning recently-ish (4.4-patch 2); this one
bit me pretty hard a few years ago; back then it would just get silently
discarded without warning; I guess a warning is an “improvement” of sorts
(fixing it, however, would be an actual improvement[1]).

NUL bytes aren’t that uncommon, think of e.g. find -print0, xargs -0, etc.
That all works grand, right up to the point you try to assign it to a variable.
You can use NUL bytes for array assignments though, if you evoke the right
incantation:

bash% readarray -td '' arr < <(find . -print0)

There are all sorts of edge cases where you need to resort to read or
readarray rather than being able to just assign it. In zsh it’s just
arr=($(find . -print0)).

Don’t even think of doing something like:

img=$(curl https://example.com/image.png)
if [[ $cond ]]; then
    optipng <<<"$img" > out.png
else
    cat <<<"$img" > out.png
fi

Of course you can refactor this to avoid the variable (and the example is a bit
contrived), but it really ought to work. I once wrote a script to import emails
from the Mailgun API. It worked great, yet sometimes images were mangled and I
just couldn’t figure out why. Turns out Mailgun “helpfully” decodes attachments
(i.e. removes the base64) and sends binary content, which bash (at the time)
would silently discard. It took me a very long time to figure out. I forgot why,
but it was hard to avoid storing the response in a variable. I ended up
rewriting it in Python because of this, which was just a waste of time really.
It’s this, specifically, that really soured me on bash and prompted The shell
scripting trap
in 2017. However, many items listed there are solved by
zsh, including this one.


zsh also fixes most of the quoting:

zsh% for f in *; ls -l $f
-rw-rw-r-- 1 martin martin 0 Oct 19 06:51 asd.txt
-rw-rw-r-- 1 martin martin 0 Oct 19 06:51 with space.txt

bash% for f in *; do ls -l $f; done
-rw-rw-r-- 1 martin martin 0 Oct 19 06:51 asd.txt
ls: cannot access 'with': No such file or directory
ls: cannot access 'space.txt': No such file or directory

It’s not POSIX compatible, but who cares? bash doesn’t follow POSIX in all
sorts of ways by default either, because it just makes more sense, and with
both you can still tell them to be POSIX-compatible if you must for one reason
or the other.

Also note the convenient short version of the
for loop: no need for do, done, and muckery with ; before the done,
which is handy for quick one-liners you type interactively.
You can still do word splitting, but you need to do it explicitly:

zsh% for i in *; ls -l $=i
-rw-rw-r-- 1 martin martin 0 Oct 19 06:51 asd.txt
ls: cannot access 'with': No such file or directory
ls: cannot access 'space.txt': No such file or directory

[[ is supposed to fix [, but it still has weird quoting quirks:

zsh% a=foo; b=*;
zsh% if [[ $a = $b ]]; then
       print 'Equal!'
     else
       print 'Nope!'
     fi
Nope!

bash% a=foo; b=*
bash% if [[ $a = $b ]]; then
        echo 'Equal!'
      else
        echo 'Nope!'
      fi
Equal!

This is equal because without quotes it still interprets the right-hand side as
a pattern (i.e. glob). In zsh you need to use $~var to explicitly enable
pattern matching, which is a much better model than remembering when you do and
don’t need to quote things – sometimes you do want the pattern matching and
then you don’t want quotes; it’s not always immediately obvious if an if [[ ...
statement is correct if it’s lacking quotes.

“But Martin, you should always quote things, you’re being disingenuous!” Well, I
could make a comfortable living if I were paid to add quotes to other people’s
shell scripts. Telling people to “always quote things” is what we’ve been doing
for 40 years now and irrefutable observational evidence has demonstrated that it
just does not work.

Most people aren’t shell scripting wizards; they make a living writing Python or
C or Go or PHP programs, or maybe they’re sysadmins or scientists, and oh,
occasionally they also write a shell script. They just see something that works
and assume it has sane behaviour and don’t realize the subtle differences
between $@, $*, and "$@". I think that’s actually quite reasonable,
because the behaviour is odd, surprising, and confusing.

It’s also a lot more complex than just “quote your variables”, especially if you
use $(..) since command substitution often needs quotes too, as well as any
variables inside it. Before you know it you’ve got double, triple, or more
levels of nested quotes and if you forget one set of them you’re in trouble.
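A contrived sketch of how quickly the quotes pile up (the paths here are made up for the example); each $(..) opens a fresh quoting context, so the quotes nest, and dropping any one pair quietly reintroduces word splitting:

```shell
# Set up a directory and file with spaces in their names.
dir="/tmp/quote demo"
mkdir -p "$dir" && touch "$dir/a b.txt"

# Three nesting levels: ${..} inside $(..) inside $(..); every level
# needs its own quotes or "a b.txt" falls apart into two words.
first="$(basename "$(printf '%s\n' "$dir"/*.txt | head -n1)")"
echo "$first"   # a b.txt -- intact only because every level is quoted

rm -r "$dir"
```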

It’s such a fertile source of real-world bugs that it would merit entomologist
examination. If there’s a system that causes this many bugs then that system is
at fault, and not the people using it. Computers and software should adapt to
humans, not the other way around.

And “always quote things!” isn’t even right either, because you should always
quote things except when you shouldn’t:

zsh% a=foo; b=.*;
zsh% if [[ "$a" =~ "$b" ]]; then
       print 'Equal!'
     else
       print 'Nope!'
     fi
Equal!

bash% a=foo; b=.*
bash% if [[ "$a" =~ "$b" ]]; then
        echo 'Equal!'
      else
        echo 'Nope!'
      fi
Nope!

If there are quotes around a regexp then it’s treated as a literal string. I
mean, it’s consistent with = pattern matching, but also confusing because I
explicitly use =~ to match a regular expression.

Another famous quoting trap is $@ vs. "$@" vs. $* vs. "$*":

zsh% cat args
echo "unquoted @:"
for a in $@; do echo "  => $a"; done

echo "quoted @:"
for a in "$@"; do echo "  => $a"; done

echo "unquoted *:"
for a in $*; do echo "  => $a"; done

echo "quoted *:"
for a in "$*"; do echo "  => $a"; done

bash% bash args hello world 'test space' '*'
unquoted @:
  => hello
  => world
  => test
  => space
  => Guust1129.jpg
  => IEEESTD.2019.8766229.pdf
  [.. rest of my $HOME ..]
quoted @:
  => hello
  => world
  => test space
  => *
unquoted *:
  => hello
  => world
  => test
  => space
  => Guust1129.jpg
  => IEEESTD.2019.8766229.pdf
  [.. rest of my $HOME ..]
quoted *:
  => hello world test space *

Experienced shellers will know to (almost) always use "$@", but how often do
you see it being done wrong? It’s not that strange people do it wrong either; if
you learned about quoting and word splitting then $@ without quotes is
actually the logical thing to use because you would expect "$@" to be
treated as one argument (as "$*"). You tell people to “always add quotes to
prevent word splitting and treat things as a single argument”, and then you tell
them “oh, except in this one special case when the addition of quotes invokes
a special kind of splitting and doesn’t follow any of the rules we previously
told you about”.

In zsh, $@ and $* (and $argv) are all (read-only) arrays and it all
behaves identically, as you would expect, with no surprises:

zsh% zsh args hello world 'test space' '*'
unquoted @:
  => hello
  => world
  => test space
  => *
quoted @:
  => hello
  => world
  => test space
  => *
unquoted *:
  => hello
  => world
  => test space
  => *
quoted *:
  => hello world test space *

Actually in bash you can do argv=("$@") and then you have an array. This is
really how it should work by default.

You still need to loop over it with:

bash% for a in "${argv[@]}"; do
        echo "=> $a"
      done

Rather than just for a in $argv like in zsh. Aside from the pointless [@],
why would you ever want to word-split every element of an array? There is
probably some use case somewhere, but it’s exceedingly rare. Better to just skip
all of that by default unless explicitly enabled with = and/or ~.
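A quick demonstration of why word-splitting every array element is almost never what you want (bash invoked explicitly, since this is bash behaviour):

```shell
bash -c '
arr=(one "two words" three)
echo "${#arr[@]}"   # 3: the array has three elements
set -- ${arr[@]}    # unquoted: every element gets word-split again
echo "$#"           # 4: "two words" fell apart into two arguments
'
```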

Oh, here’s another interesting tidbit:

zsh% n=3; for i in {1..$n}; print $i
1
2
3

bash% n=3; for i in {1..$n}; do echo "$i"; done
{1..3}

bash% n=3; for i in {1..3}; do echo "$i"; done
1
2
3

Why does it work like that? That’s left as an exercise for the reader 😉
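(Spoiler, in case you don’t feel like exercising: bash does brace expansion before parameter expansion, so {1..$n} is inspected before $n has a value; it doesn’t look like a valid {x..y} sequence, gets left alone, and only then does $n expand. seq or a C-style loop sidesteps the ordering:)

```shell
# bash expands braces before parameters, so {1..$n} never sees n's value.
n=3
for i in $(seq 1 "$n"); do echo "$i"; done
# prints 1, 2, 3 on separate lines
# equivalently: for ((i=1; i<=n; i++)); do echo "$i"; done
```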


Aside from all sorts of caveats that are handled much better, a lot of common
things are just much easier in zsh:

zsh% arr=(123 5 1 9)
zsh% echo ${(o)arr}     # Lexical order
1 123 5 9
zsh% echo ${(on)arr}    # Numeric order
1 5 9 123

bash% arr=(123 5 1 9)
bash% IFS=$'\n'; echo "$(sort <<<"${arr[*]}")"; unset IFS
1 123 5 9
bash% IFS=$'\n'; echo "$(sort -n <<<"${arr[*]}")"; unset IFS
1 5 9 123

I had to look up how to do it in bash; the Stack Overflow answer for that one
starts with “you don’t really need all that much code”. lol? I guess that’s in
reply to some of the other horrendously complex answers which implement “pure
bash” sorting algorithms and the like. I guess compared to that this is “not
that much code”. And of course the entire thing is a minefield of expansion
again; and if you forget a set of nested quotes you end up in trouble.

Arrays in general are just awkward in bash:

bash% arr=(first second third fourth)

bash% echo ${arr[0]}
first
bash% echo ${arr[@]::2}
first second

I mean, it works, and it’s not that much typing, but why do I need that [@]?
Probably some (historical) reason, but zsh implements it in a much more
readable and easier way:

zsh% arr=(first second third fourth)

zsh% print ${arr[1]}        # Yeah, it's 1-based. Deal with it.
first
zsh% print ${arr[1,2]}
first second

The bash array syntax was copied from ksh; so I guess we have to blame David
Korn (zsh supports it too, if you must use it). But regular subscripts are just
so much easier.

And then there’s the useful features:

zsh% ls *.go
format.go  format_test.go  gen.go  old.go  uni.go  uni_test.go

zsh% ls *.go~*_test.go
format.go  gen.go  old.go  uni.go

zsh% ls *.go~*_test.go~f*
gen.go  old.go  uni.go

*.go gets expanded, and anything matching the pattern after the ~ (*_test.go
in this case) is filtered out. Looks a bit obscure at first glance, but bash’s
ksh-style extglobs are far harder:

bash% ls !(*_test).go
format.go  gen.go  old.go  uni.go

bash% ls !(*_test|f*).go
gen.go  old.go  uni.go

!(..) is “match anything except the pattern”; the * is implied here (zsh
supports !(..) if you set ksh_glob). While it works, the
pattern~filter~filter model is much easier, and also more flexible since you
don’t need to start with all matches.

There are many useful things you can do with globbing; you can replace many uses
of find with it, and you don’t need to worry about the caveats, -print0
hacks, etc. For example to recursively list all regular files:

zsh% ls **/*(.)
LICENSE         go.sum           unidata/gen.go             wasm/make*
README.md       old.go           unidata/gen_codepoints.go  wasm/srv*
[..]

Or directories:

zsh% ls -d /etc/**/*(/)
/etc/OpenCL/                      /etc/runit/runsvdir/default/dnscrypt-proxy/log/
/etc/OpenCL/vendors/              /etc/runit/runsvdir/default/ntpd/log/
/etc/X11/                         /etc/runit/runsvdir/default/postgresql/supervise/
[..]

Or files that were changed in the last week:

zsh% ls -l /etc/***(.m-7)    # *** is a shortcut for **/*; needs GLOB_STAR_SHORT
-rw-r--r-- 1 root root 28099 Oct 13 03:47 /etc/dnscrypt-proxy.toml.new-2.1.1_1
-rw-r--r-- 1 root root    97 Oct 13 03:47 /etc/environment
-rw-r--r-- 1 root root 37109 Oct 17 10:34 /etc/ld.so.cache
-rw-r--r-- 1 root root 77941 Oct 19 01:01 /etc/public-resolvers.md
-rw-r--r-- 1 root root  6011 Oct 19 01:01 /etc/relays.md
-rw-r--r-- 1 root root   142 Oct 19 07:57 /etc/shells

You can even order them by modified date with om (***(.m-7om)), although
that’s a bit pointless here as ls will reorder them again, but if you’re
looping over files it’s useful.

There is no way to do any of this in bash, you’ll have to use something like:

bash% find /etc -type f -mtime -7 -exec ls -l {} +
find: ‘/etc/sv/docker/supervise’: Permission denied
find: ‘/etc/sv/docker/log/supervise’: Permission denied
find: ‘/etc/sv/bluetoothd/log/supervise’: Permission denied
find: ‘/etc/sv/postgresql/supervise’: Permission denied
find: ‘/etc/sv/runsvdir-martin/supervise’: Permission denied
find: ‘/etc/wpa_supplicant/supervise’: Permission denied
find: ‘/etc/lvm/cache’: Permission denied
-rw-rw-r-- 1 root root  167 Oct 12 22:17 /etc/default/postgresql
-rw-r--r-- 1 root root  817 Oct 12 09:11 /etc/fstab
-rw-r--r-- 1 root root 1398 Oct 12 22:19 /etc/passwd
-rw-r--r-- 1 root root 1397 Oct 12 22:19 /etc/passwd.OLD
-rw-r--r-- 1 root root  307 Oct 12 23:10 /etc/public-resolvers.md.minisig
-rw-r--r-- 1 root root  297 Oct 12 23:10 /etc/relays.md.minisig
-r-------- 1 root root  932 Oct 12 09:57 /etc/shadow
-rwxrwxr-x 1 root root  397 Oct 12 22:23 /etc/sv/postgresql/run

Not sure how to make it ignore these errors without redirecting stderr (more
typing!). And if you think adding single letters in (..) after a pattern is
hard then try understanding find’s weird flag syntax. Glob qualifiers are
great.

csh-style parameter substitution is pretty useful:

zsh% for f in ~/photos/*.png; convert $f ${f:t:r}.jpeg

:t to get the tail, and :r to get the root (without extension). csh could do
this before I was even born, but bash can’t (it can for history expansion, but
not variables). According to the bash FAQ “Posix has specified a more powerful,
albeit somewhat more cryptic, mechanism cribbed from ksh”, which I find a
somewhat curious statement as the above in bash is:

bash% for f in ~/photos/*.png; do convert "$f" "$(basename "${f%%.*}").jpeg"; done

Technically “more powerful” in the sense that you can do other things with it,
but not really “more useful for common operations” (zsh, of course, implements
% and # as well).
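For comparison, here is zsh’s :t and :r spelled out in pure bash parameter expansion, without the basename subshell (the path is made up for the example). Note that ${f%%.*} in the one-liner above strips from the *first* dot, while :r only strips the last extension; ${base%.*} is the closer match:

```shell
f=/home/martin/photos/pic.blurry.png   # example path
base=${f##*/}       # tail: longest */ match stripped from the front
root=${base%.*}     # root: shortest .* match stripped from the back
echo "$root.jpeg"   # pic.blurry.jpeg
```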

Note you can’t nest ${..} in bash; e.g. "${${f%%.*}##*/}" is an error:

zsh% f=~/asd.png; print "${${f%%.*}##*/}"
asd

bash% f=~/a.png; echo "${${f%%.*}##*/}"
bash: ${${f%%.*}##*/}: bad substitution

While this can quickly lead to very unreadable ASCII vomit, it’s useful on
occasion, when used with care and wisdom. You can click below for a more
advanced example if the children are already in bed.

Click to see NSFW content. Not suitable for children under 18!

For example, this can be used to show the longest element in an array:

print ${array[(r)${(l.${#${(O@)array//?/X}[1]}..?.)}]}

Cribbed from the zsh User’s Guide.


There are many more things. I’m not going to list them all here. None of this is
new; much (if not all?) of this has been around for 20 years, if not longer. I
don’t know why bash is the de-facto default, or why people spend time on complex
solutions to work around bash problems when zsh solves them. I guess because
Linux used a lot of GNU stuff and bash came with it, and GNU stuff was (and
is) using bash. Not a very good reason, certainly not one 30 years later.

zsh still has plenty of limitations; for starters, the syntax isn’t always
something you’d want to show your mother, and there are a number of other things.
Still, it’s clearly better. I genuinely can’t find a single thing bash does
better beyond “it’s installed on many systems already”.

Ubiquitousness is overrated anyway; zsh has no dependencies beyond libc and
curses, is 970K on my system[2], and is available for pretty much all
systems. Compared to most other interpreters it’s tiny, with only Lua being
smaller (275K). “Stick to POSIX sh for compatibility” was good advice in 1990
when you had some SunOS system with some sun-sh and that’s what you were stuck
with. Those days are long gone, and while there are a few poor souls still stuck
on those systems (sometimes even with csh!) chances are they’re not going to try
and run your Docker or Arch Linux shell script or whatnot on those systems
anyway.

Contorting yourself in all sorts of strange bends to perhaps possibly maybe make
it work for a tiny fraction of users who are unlikely to use your script anyway
does not seem like a good trade-off, especially since these kinds of limitations
tend to be organisational rather than technical, which is not my problem anyway
to be honest.

Using zsh is also more portable, since it allows you to avoid many shell tools
and the (potential) incompatibilities, and by explicitly setting zsh as the
interpreter you can rely on zsh behaviour, rather than hoping the /bin/sh on
$random_system behaves the same (even dash has some extensions to POSIX sh, such
as local).

What I typically do is save files as script.zsh with:

#!/usr/bin/env zsh
[ "${ZSH_VERSION:-}" = "" ] && echo >&2 "Only works with zsh" && exit 1

This makes sure it gets run by zsh when used as ./script.zsh, and gives an
error in case people type sh script.zsh or bash script.zsh in case the .zsh
extension isn’t enough of a clue.

So in conclusion: s/bash/zsh/g and everything will be just a little bit
better.

P.S. maybe fish is even better by the way, but I could never get over the bright
colouring and all these things popping in and out of my screen; it’s such a
“busy” chaotic experience! I’d have to spend time disabling all that stuff to
make it usable for me, and I never bothered – and if you’re disabling the key
selling points of a program then it’s probably not intended for you anyway.
Maybe I’ll have a crack at it at some point though.

Footnotes

  1. Which I assume isn’t so easy, otherwise it would have been done already.
    The reason it doesn’t work is an artifact from C’s NUL-terminated strings,
    but this kind of stuff really shouldn’t be exposed in a high-level language
    like the shell. It’s also a bit ironic since one of Stephen Bourne’s
    original goals with his shell was to get rid of arbitrary size limits on
    strings, which were common at the time. 

  2. A full install is ~8M, mostly in the optional completion functions it
    ships with. Bash is about 1.3M by the way. 

Getting started with RimWorld modding on Linux

Post Syndicated from arp242.net original https://www.arp242.net/rimworld-mod-linux.html

This describes how to create RimWorld mods on Linux; this is an introduction to
both RimWorld modding and developing C♯ with Mono; it’s essentially the steps I
followed to get started.

This doesn’t assume any knowledge of Unity, Mono, or C♯ but some familiarity
with Linux and general programming is assumed; if you’re completely new to
programming then this probably isn’t a good resource. A lot of this will work on
Windows or macOS too; it’s just the C♯ build steps that are really
Linux-specific, as are various pathnames etc.

RimWorld mods consist of two parts:

  • A set of XML definitions (“Defs”) which defines everything from items, actions
    you can take, research projects, weather, etc. This is the “glue” that
    actually makes stuff appear in the game, applies effects, etc.

    For (very) simple mods this may actually be enough, and no “real” coding is
    required. You can use XML files to both add new stuff, and RimWorld has
    facilities to patch existing in-game content.

  • C♯ code which either adds entirely new stuff, or monkey-patches existing code.

As an example we’ll make a little mod that makes it rain blood. Why? It seemed
easy enough to do while also exploring some of the core concepts. Also, I was
playing Slayer when I started on this. The complete example mod is on
GitHub
, but I encourage people to modify things manually (and maybe play
around with things a bit) rather than copy/paste stuff from there; it’s just a
better way to learn things.

Getting started

Before we start with the C♯ stuff let’s set up a basic mod which adds a new
weather type; we just need to edit some XML for this.

Mods are located in the Mods/ directory in your RimWorld installation
directory; I’m using the version I bought from the RimWorld website and
extracted to ~/rimworld so that’s nice and simple. GOG.com games usually store
the actual game data in a game/ subdirectory, and I don’t know where Steam
stores things 🤷

This directory should already exist with a Mods/Place mods here.txt. A mod
must have an About/About.xml file; a minimal version looks like:

<?xml version="1.0" encoding="utf-8"?>
<ModMetaData>
    <!-- Must contain a dot; usually <author>.<modname> -->
    <packageId>arp242.RainingBlood</packageId>
    <name>Raining blood</name>

    <!-- Game versions this mod supports. More on game versions later. -->
    <supportedVersions>
        <li>1.1</li>
        <li>1.2</li>
        <li>1.3</li>
    </supportedVersions>
</ModMetaData>

See ModUpdating.txt in the RimWorld installation directory for a full
description of the About.xml fields. For now, this is enough.

The official content uses essentially the same structure as a mod except that
it’s in the Data/ directory; e.g. Data/Core/ contains the base game,
Data/Royalty the Royalty expansion, etc. To find the weather definitions I
just used:

[~/rimworld/Data/Core]% ls (#i)**/*weather*.xml
Defs/WeatherDefs/Weathers.xml

The (#i) makes things case-insensitive in zsh, FYI. zsh is nice. You can also
use find -iname if you enjoy more typing.

Weathers.xml seems to define all the weather types. I copied the definition of
“rain” to Mods/RainingBlood/Defs/WeatherDefs/RainingBlood.xml with some
modifications:

<?xml version="1.0" encoding="utf-8" ?>
<Defs>
<WeatherDef>
	<defName>RainingBlood</defName>
	<label>raining blood</label>
	<description>It's raining blood; what the hell?!</description>

    <!-- ThoughtDefs/RainingBlood.xml -->
    <!-- <exposedThought>SoakingWet</exposedThought> -->
	<exposedThought>BloodCovered</exposedThought>

	<!-- Copied from rain -->
	<temperatureRange>0~100</temperatureRange>
	<windSpeedFactor>1.5</windSpeedFactor>
	<accuracyMultiplier>0.8</accuracyMultiplier>
	<favorability>Neutral</favorability>
	<perceivePriority>1</perceivePriority>

	<rainRate>1</rainRate>
	<moveSpeedMultiplier>0.9</moveSpeedMultiplier>
	<ambientSounds>
		<li>Ambient_Rain</li>
	</ambientSounds>
	<overlayClasses>
		<li>WeatherOverlay_Rain</li>
	</overlayClasses>
	<commonalityRainfallFactor>
		<points>
			<li>(0, 0)</li>
			<li>(1300, 1)</li>
			<li>(4000, 3.0)</li>
		</points>
	</commonalityRainfallFactor>

	<!-- Colours modified to be reddish; just a crude effect. -->
	<skyColorsDay>
		<sky>(0.8,0.2,0.2)</sky>
		<shadow>(0.92,0.2,0.2)</shadow>
		<overlay>(0.7,0.2,0.2)</overlay>
		<saturation>0.9</saturation>
	</skyColorsDay>

	<skyColorsDusk>
		<sky>(1,0,0)</sky>
		<shadow>(0.92,0.2,0.2)</shadow>
		<overlay>(0.6,0.2,0.2)</overlay>
		<saturation>0.9</saturation>
	</skyColorsDusk>

	<skyColorsNightEdge>
		<sky>(0.35,0.10,0.15)</sky>
		<shadow>(0.92,0.22,0.22)</shadow>
		<overlay>(0.5,0.1,0.1)</overlay>
		<saturation>0.9</saturation>
	</skyColorsNightEdge>

	<skyColorsNightMid>
		<sky>(0.35,0.20,0.25)</sky>
		<shadow>(0.92,0.22,0.22)</shadow>
		<overlay>(0.5,0.2,0.2)</overlay>
		<saturation>0.9</saturation>
	</skyColorsNightMid>
</WeatherDef>
</Defs>

The location where you store it doesn’t actually matter as long as it’s in
Defs/; Defs/xxx.xml will work too. Internally all XML files in Defs/ are
scanned into the same data structure; it just recursively searches for *.xml
files and uses <defName>RainingBlood</defName> to identify them rather than
the path.

It’s not very fancy. We also need a new “exposed thought”; that’s the mood
modifier that shows up in the “needs” tab; “Soaking wet” doesn’t really seem
applicable if you’re “soaking wet in blood” 🙃

Let’s grep for it:

[~/rimworld/Data/Core]% rg SoakingWet
Defs/TerrainDefs/Terrain_Water.xml
12:    <traversedThought>SoakingWet</traversedThought>

Defs/ThoughtDefs/Thoughts_Memory_Misc.xml
297:    <defName>SoakingWet</defName>

Defs/WeatherDefs/Weathers.xml
98:    <exposedThought>SoakingWet</exposedThought>
202:    <exposedThought>SoakingWet</exposedThought>
267:    <exposedThought>SoakingWet</exposedThought>

Thoughts_Memory_Misc.xml seems to be what we want, so make a copy of that to
Mods/RainingBlood/Defs/ThoughtDefs/RainingBlood.xml:

<?xml version="1.0" encoding="utf-8" ?>
<Defs>
<ThoughtDef>
    <defName>BloodCovered</defName>
    <durationDays>0.1</durationDays>
    <stackLimit>1</stackLimit>
    <stages>
        <li>
            <label>blood covered</label>
            <description>I'm covered in blood; yuk!</description>
            <baseMoodEffect>-30</baseMoodEffect>
        </li>
    </stages>
</ThoughtDef>
</Defs>

The meaning of the fields in both XML files should be mostly self-explanatory,
but if you want to know what exactly something does you’ll need to decompile
the game to read the source code. We’ll cover that later.

At this point, the basic mod should be done; let’s test it.

Running the game

Start the game normally and select the mod in the Mods panel. After this you can
start the game with ./RimworldLinux -quicktest, which will start the game in a
new small map with the last selected mods.

You can select “Development mode” in options, which will give you a few buttons
at the top; it will also allow you to open the console with ` and
you can speed up things a wee bit more by pressing 4 (ludicrous speed!) Most
of the buttons etc. should be self-explanatory; there’s some more information
on the RimWorld wiki
.

Click the “debug actions” button at the top, which has “Change Weather” (filter
in the top-left corner; you may need to scroll down). After clicking RainingBlood
it takes a few seconds for the weather to transition and the status to show up
in your colonists.

Patching the biomes

It’s all very good that we can select this from our magical debug actions, but
does it actually appear in a regular game? Let’s search where the
RainyThunderstorm weather is referenced (as that’s a bit more unique than just
“rain”):

[~/rimworld/Data/Core]% rg RainyThunderstorm
Defs/BiomeDefs/Biomes_Cold.xml
101:      <RainyThunderstorm>1</RainyThunderstorm>
234:      <RainyThunderstorm>1</RainyThunderstorm>
389:      <RainyThunderstorm>1</RainyThunderstorm>
513:      <RainyThunderstorm>0</RainyThunderstorm>
611:      <RainyThunderstorm>0</RainyThunderstorm>

Defs/BiomeDefs/Biomes_Temperate.xml
104:      <RainyThunderstorm>1</RainyThunderstorm>
262:      <RainyThunderstorm>1</RainyThunderstorm>

Defs/BiomeDefs/Biomes_Warm.xml
109:      <RainyThunderstorm>1.7</RainyThunderstorm>
277:      <RainyThunderstorm>1.7</RainyThunderstorm>

Defs/BiomeDefs/Biomes_WarmArid.xml
79:      <RainyThunderstorm>1</RainyThunderstorm>
204:      <RainyThunderstorm>1</RainyThunderstorm>
312:      <RainyThunderstorm>1</RainyThunderstorm>

Defs/WeatherDefs/Weathers.xml
193:    <defName>RainyThunderstorm</defName>

e.g. Biomes_Cold.xml has:

<baseWeatherCommonalities>
    <Clear>18</Clear>
    <Fog>1</Fog>
    <Rain>2</Rain>
    <DryThunderstorm>1</DryThunderstorm>
    <RainyThunderstorm>1</RainyThunderstorm>
    <FoggyRain>1</FoggyRain>
    <SnowGentle>4</SnowGentle>
    <SnowHard>4</SnowHard>
</baseWeatherCommonalities>

Now let’s try adding our bloody rain with a high chance of spawning:

<baseWeatherCommonalities>
    <Clear>18</Clear>
    <Fog>1</Fog>
    <Rain>2</Rain>
    <DryThunderstorm>1</DryThunderstorm>
    <RainyThunderstorm>1</RainyThunderstorm>
    <FoggyRain>1</FoggyRain>
    <SnowGentle>4</SnowGentle>
    <SnowHard>4</SnowHard>

    <RainingBlood>64</RainingBlood> <!-- References the defName -->
</baseWeatherCommonalities>

Why 64? Well, the other numbers add up to 32 and if they’re relative weights
then 64 means a 2/3rd chance of our raining blood weather. “Trying it and seeing
what happens” is pretty much what I’m doing here. Throw enough macaroni at a
wall and sooner or later some of it will stick.[1]

The easiest way to override this is to copy the XML file to your
Mods/[..]/Defs/ directory. Again, the path doesn’t matter, it just looks at
the defName attribute; the last one overrides any previous ones.

This is pretty useful for testing, debugging, etc. as you can focus on just the
XML without worrying if it’s patched correctly. The obvious downside is that you
won’t include any future updates (which may break the game due to missing fields
etc.), and if someone decides to make a “RainingMen” mod then one will override
the other, and you can’t have both mods. You never want to do this in a
published mod, but for testing it’s useful.

Testing this is a bit annoying, since you need to wait for it to take effect.
Also, it seems the game always sets the initial weather for at least 10 in-game
days, so you may want to load a save game instead of using -quicktest.
Remember you can press 4 if you enabled the dev console, which speeds up
the game to 15× (3 is 6×). You can also make 4 speed it up to a whopping
150× by going to the “TweakValues” developer menu and enabling
TickManager.UltraSpeedBoost. I am disappointed this is called UltraFast and
UltraSpeedBoost instead of RidiculousSpeed and LudicrousSpeed.

After confirming that our “override it all”-method works let’s properly patch
stuff. There are several ways of patching XML resources; I’ll use XPath
here, which is the easiest if you just need to patch some XML. Any XML file in
Patches/ is treated as a patch.

Our patch will just patch all biomes in Patches/Biomes.xml, but you can select for
[defName=..] if you only want to patch specific ones. Remember to remove the
overrides if you have any.

<?xml version="1.0" encoding="utf-8" ?>
<Patch>
<!-- Class, not class! -->
<Operation Class="PatchOperationAdd">
    <xpath>/Defs/BiomeDef/baseWeatherCommonalities</xpath>
    <value>
        <RainingBlood>64</RainingBlood>
    </value>
</Operation>
</Patch>

As previously mentioned all XML files in Defs/ are in the same data structure,
so don’t worry about the pathnames. There are a number of other operations you
can do; see the full documentation for more details on how patching
works.

You can use xmllint from libxml2 to test queries on the commandline:

% xmllint --xpath '/Defs/BiomeDef/baseWeatherCommonalities' \
    Data/Core/Defs/BiomeDefs/Biomes_Cold.xml

Writing C♯ code

Let’s expand the mod a bit by making cannibals like raining blood, giving
them a mood boost rather than a mood penalty. There isn’t any way to express
that in the XML defs, so we need some code for that.

Decompiling the code

Some of the game’s source code is in the installation directory (e.g.
~/rimworld/Source) but it’s not a lot; there’s
Source/Verse/Defs/DefTypes/WeatherDef.cs, but it’s not all that useful. You
can more or less ignore this directory.

To actually figure out how to write mods we’ll need to decompile the C♯ code in
RimWorldLinux_Data/Managed/Assembly-CSharp.dll.[2] There are several
tools
for this; I’ll use ILSpy. This doesn’t seem packaged in most
distros
but there are Linux binaries for the GUI available as
AvaloniaILSpy. This seems to work well enough, but I prefer to extract all the
code at once so I can use Vim and grep and whatnot, and the GUI doesn’t seem to
do that (there is “save code”, but that doesn’t seem to do anything).

You need to build the ilspycmd binary from source; there isn’t a pre-compiled
version as far as I can find. Basic instructions:

# The ".NET home".
% export DOTNET_ROOT=$HOME/dotnet
% mkdir -p $DOTNET_ROOT

# Needs .NET SDK 5 and .NET Core 3.1; binaries from:
#   https://dotnet.microsoft.com/download/dotnet/5.0
#   https://dotnet.microsoft.com/download/dotnet/3.1
# Versions may be different; this is just indicative.
% tar xf dotnet-sdk-5.0.401-linux-x64.tar.gz -C $DOTNET_ROOT
% tar xf dotnet-sdk-3.1.413-linux-x64.tar.gz -C $DOTNET_ROOT

# Add the dotnet path, the binaries we compile later will be in ~/.dotnet/tools
% export PATH=$PATH:$HOME/dotnet:$HOME/.dotnet/tools

# Just the "source code" tar.gz from the GitHub release:
# https://github.com/icsharpcode/ILSpy/archive/refs/tags/v7.1.tar.gz
% tar xf ILSpy-7.1.tar.gz
% cd ILSpy-7.1
% dotnet tool install ilspycmd -g

# Now decompile the lot to src.
% cd ~/rimworld
% mkdir src
% ilspycmd ./RimWorldLinux_Data/Managed/Assembly-CSharp.dll -p -o src

# Hurray!
% ls src
Assembly-CSharp.csproj       FleckUtility.cs         RimWorld/
ComplexWorker_Ancient.cs     HistoryEventUtility.cs  Verse/
ComplexWorker.cs             Ionic/                  WeaponClassDef.cs
DarknessCombatUtility.cs     Properties/             WeaponClassPairDef.cs
FleckParallelizationInfo.cs  ResearchUtility.cs

You only need to do this once. Note that DOTNET_ROOT is a runtime dependency
of ilspycmd, so don’t remove it unless you’re sure you won’t need to run it
again.

The decompiled source doesn’t have any comments, and some variable names are
changed from the original (num1, num2, num3, etc.), but it’s mostly fairly
readable. The versions in Source do have comments, but the paths don’t quite
match up (it seems many subdirs are lost in the decompile?) I considered copying
them over to src, but I’m not sure the code in Source matches this exact
version.

Building the Assembly

“Assembly” is C♯ speak for any compiled output such as an executable (.exe) or
shared library (.dll). We need to set up a “build solution” (C♯ “Makefiles”) to
build them. Let’s start by just setting up a basic example before we start
actually writing code.

By convention the source code lives in Mods/.../Source/, but I don’t think
this is required since the game doesn’t do anything with it directly. The
resulting DLL files should be in Mods/.../Assemblies/. Note that you will use
a .dll file on Linux as well – it’s just how Mono/C♯ on Linux works. Assemblies
are cross-platform: one built on Linux should also work on Windows and
vice versa.

The game code lives in two namespaces: Verse and RimWorld. Verse is the
game engine and RimWorld is the game built on that. At least, I think that was
the intention at some point, as all sorts of RimWorld-specific things seem to be
in Verse (which also references the RimWorld namespace frequently) and there
isn’t really a clear dividing line. But mostly: general “engine-y things” are in
Verse and “RimWorld-y things” are in RimWorld, except when they’re not.

In Source/RainingBlood.cs we’ll add a simple example to log something to the
developer console:

namespace RainingBlood {
    [Verse.StaticConstructorOnStartup]
    public static class RainingBlood {
        static RainingBlood() {
            Verse.Log.Message("Hello, world!");
        }
    }
}

The [Verse.StaticConstructorOnStartup] annotation makes the code run when the
game starts: the game searches for all static constructors with this annotation
at startup and executes them. If you really want to know how it works you can
use something like rg '[^\[]StaticConstructorOnStartup'.

Another way is inheriting from the Verse.Mod class, which allows some more
advanced things (most notably implementing settings), but I’m not
going to cover that here.

To build this we’ll need to set up a “build solution”, which consists of a
.csproj XML file and a .sln file. This is something I mostly just copied and
modified from other projects; it seems most people auto-generate these from
Visual Studio or MonoDevelop, with few instructions on how to write them
manually. There’s probably a better way of doing some things (I’m not a huge
fan of the hard-coded paths instead of using some LDPATH analogue), but I
haven’t dived in to this yet.

Anyway, here’s what I ended up with in Mods/RainingBlood/RainingBlood.csproj:

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="14.0" DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">

    <Import
        Project="$(MSBuildExtensionsPath)/$(MSBuildToolsVersion)/Microsoft.Common.props"
        Condition="Exists('$(MSBuildExtensionsPath)/$(MSBuildToolsVersion)/Microsoft.Common.props')"
    />

    <PropertyGroup>
        <RootNamespace>RainingBlood</RootNamespace>
        <AssemblyName>RainingBlood</AssemblyName>
        <!-- You probably want to modify this GUID for your mod, as it's supposed to be unique.
             This is also referenced in the .sln file.
             My system has "uuidgen" to generate UUIDs. -->
        <ProjectGuid>{7196d15e-d480-441a-a2e0-87b9696dd38f}</ProjectGuid>

        <Configuration Condition=" '$(Configuration)' == '' ">Debug</Configuration>
        <Platform Condition=" '$(Platform)' == '' ">AnyCPU</Platform>
        <OutputType>Library</OutputType>
        <AppDesignerFolder>Properties</AppDesignerFolder>
        <TargetFrameworkVersion>v4.7.2</TargetFrameworkVersion>
        <FileAlignment>512</FileAlignment>
        <TargetFrameworkProfile />
    </PropertyGroup>

    <!-- Debug build -->
    <PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Debug|AnyCPU' ">
        <DebugSymbols>false</DebugSymbols>
        <DebugType>none</DebugType>
        <Optimize>false</Optimize>
        <OutputPath>Assemblies/</OutputPath>
        <DefineConstants>DEBUG;TRACE</DefineConstants>
        <ErrorReport>prompt</ErrorReport>
        <WarningLevel>4</WarningLevel>
        <UseVSHostingProcess>false</UseVSHostingProcess>
        <Prefer32Bit>false</Prefer32Bit>
    </PropertyGroup>
    <!-- Release build -->
    <PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Release|AnyCPU' ">
        <DebugType>none</DebugType>
        <Optimize>true</Optimize>
        <OutputPath>Assemblies/</OutputPath>
        <DefineConstants>TRACE</DefineConstants>
        <ErrorReport>prompt</ErrorReport>
        <WarningLevel>3</WarningLevel>
        <Prefer32Bit>false</Prefer32Bit>
    </PropertyGroup>

    <!-- Dependencies -->
    <ItemGroup>
        <!-- The main game code (RimWorld and Verse) -->
        <Reference Include="Assembly-CSharp">
            <HintPath>../../RimWorldLinux_Data/Managed/Assembly-CSharp.dll</HintPath>
            <Private>False</Private>
        </Reference>

        <!-- C#/.NET stdlib -->
        <Reference Include="System" />
        <Reference Include="System.Core" />
        <Reference Include="System.Runtime.InteropServices.RuntimeInformation" />
        <Reference Include="System.Xml.Linq" />
        <Reference Include="System.Data.DataSetExtensions" />
        <Reference Include="Microsoft.CSharp" />
        <Reference Include="System.Data" />
        <Reference Include="System.Net.Http" />
        <Reference Include="System.Xml" />
    </ItemGroup>

    <!-- File list -->
    <ItemGroup>
        <Compile Include="Source/RainingBlood.cs" />
    </ItemGroup>

    <Import Project="$(MSBuildToolsPath)/Microsoft.CSharp.targets" />
</Project>

And Mods/RainingBlood/RainingBlood.sln:

Microsoft Visual Studio Solution File, Format Version 12.00
# Visual Studio 15
VisualStudioVersion = 15.0.27703.2035
MinimumVisualStudioVersion = 10.0.40219.1
Project("{57073194-e8b4-4a20-b60c-ee0e10947af0}") = "RainingBlood", "RainingBlood.csproj", "{7196d15e-d480-441a-a2e0-87b9696dd38f}"
EndProject
Global
    GlobalSection(SolutionConfigurationPlatforms) = preSolution
        Debug|Any CPU = Debug|Any CPU
        Release|Any CPU = Release|Any CPU
    EndGlobalSection
    GlobalSection(ProjectConfigurationPlatforms) = postSolution
        {7196d15e-d480-441a-a2e0-87b9696dd38f}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
        {7196d15e-d480-441a-a2e0-87b9696dd38f}.Debug|Any CPU.Build.0 = Debug|Any CPU
        {7196d15e-d480-441a-a2e0-87b9696dd38f}.Release|Any CPU.ActiveCfg = Release|Any CPU
        {7196d15e-d480-441a-a2e0-87b9696dd38f}.Release|Any CPU.Build.0 = Release|Any CPU
    EndGlobalSection
    GlobalSection(SolutionProperties) = preSolution
        HideSolutionNode = FALSE
    EndGlobalSection
    GlobalSection(ExtensibilityGlobals) = postSolution
        SolutionGuid = {31005EA7-3F04-446F-80B2-016137708540}
    EndGlobalSection
EndGlobal

To build it you’ll need msbuild, which is not included in the standard Mono
installation. Mono does have xbuild, but that gives a deprecation warning
pointing towards msbuild. Maybe it works as well, but I didn’t try it. Luckily
msbuild seems commonly packaged, so I just installed it from my distro.

I put the solution files in the project root; other people prefer to put them
in the Source/ directory, but you’ll need to modify some of the paths if you
put them there. To build, simply run msbuild from the directory, or use
msbuild Mods/RainingBlood to specify a path. After this you should have
Assemblies/RainingBlood.dll.

If you now start the game and open the developer console you should see
“Hello, world!” in there.

Writing the code

Alrighty, now that all the plumbing is working we can actually start doing some
stuff. Let’s see what grepping for exposedThought gives us:

[~/rimworld/src]% rg exposedThought
Verse/AI/Pawn_MindState.cs
415: if (curWeatherLerped.exposedThought != null && !pawn.Position.Roofed(pawn.Map))
417:     pawn.needs.mood.thoughts.memories.TryGainMemoryFast(curWeatherLerped.exposedThought);

Verse/WeatherDef.cs
35: public ThoughtDef exposedThought;

Verse/AI/Pawn_MindState.cs seems to be what we want, and reading through
MindStateTick() the logic seems straightforward enough:

namespace Verse.AI {
    public class Pawn_MindState : IExposable {
        // [..]

        public void MindStateTick() {
            // [..]

            if (Find.TickManager.TicksGame % 123 == 0 &&
                pawn.Spawned && pawn.RaceProps.IsFlesh && pawn.needs.mood != null
            ) {
                TerrainDef terrain = pawn.Position.GetTerrain(pawn.Map);
                if (terrain.traversedThought != null) {
                    pawn.needs.mood.thoughts.memories.TryGainMemoryFast(terrain.traversedThought);
                }

                WeatherDef curWeatherLerped = pawn.Map.weatherManager.CurWeatherLerped;
                if (curWeatherLerped.exposedThought != null && !pawn.Position.Roofed(pawn.Map)) {
                    pawn.needs.mood.thoughts.memories.TryGainMemoryFast(curWeatherLerped.exposedThought);
                }
            }

            // [..]
        }
    }
}

So every 123rd “tick” it checks the terrain and weather and applies any mood
effects. Digging a bit deeper:

  • The Pawn class describes a person or animal (“pawn”) in the game; every Pawn
    has a MindState attached to it.

  • On every “tick” it calls the MindStateTick() method on the attached
    MindState instance, as long as the pawn isn’t dead (as well as a number of
    other things).

  • One “tick” corresponds to 1/60th real second, which is 1.44 minutes in-game
    time. This is the game’s Planck time: everything that happens will take at
    least 1.44 minutes in-game.

  • There are also “rare ticks” (= 250 ticks = 4.17 real seconds = 6 hours
    in-game, or 1/4th of a day) and “long ticks” (= 2,000 ticks = 33.33 real
    seconds = 2 days in-game) that you can hook into in various places.

  • If you speed up the game then ticks are just emitted faster: 3× or 6×. So
    instead of emitting a tick once every 1/60th second it becomes once every
    1/180th second or 1/360th second.

You can find a bit more about this in Verse/Tick*.cs. You don’t need to know
any of this for a simple mod like this one, but it’s useful if you want to
write actual real mods.
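With only the figures above (60 ticks per real second at 1× speed; 1.44 in-game minutes per tick) the conversions are plain arithmetic. A small sketch, in Python purely for illustration:

```python
# Tick arithmetic from the figures above: the game emits 60 ticks per real
# second at normal speed, and one tick advances in-game time by 1.44 minutes.
TICKS_PER_REAL_SECOND = 60
INGAME_MINUTES_PER_TICK = 1.44

def real_seconds(ticks, speed=1):
    """Real time a number of ticks takes; speed is the 1x/3x/6x multiplier."""
    return ticks / (TICKS_PER_REAL_SECOND * speed)

def ingame_hours(ticks):
    """In-game time covered by a number of ticks, in hours."""
    return ticks * INGAME_MINUTES_PER_TICK / 60
```

For example, a “rare tick” of 250 ticks is 250/60 ≈ 4.17 real seconds and 250 × 1.44 = 360 in-game minutes, i.e. 6 hours; and the mood check in MindStateTick() that runs every 123 ticks fires roughly every two real seconds at normal speed.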

Anyway, so how do we add our custom logic? Let’s first add the new “thought”
we want to apply to the Defs/ThoughtDefs/RainingBlood.xml we created earlier:

<ThoughtDef>
    <defName>BloodCoveredCannibal</defName>
    <durationDays>0.1</durationDays>
    <stackLimit>1</stackLimit>
    <stages>
        <li>
            <label>blood covered</label>
            <description>Reigning in blood!</description>
            <baseMoodEffect>10</baseMoodEffect>
        </li>
    </stages>
</ThoughtDef>

And in the Defs/WeatherDefs/RainingBlood.xml let’s add some new fields next to
the exposedThought we already have:

<modExtensions>
    <!-- Class, not class! -->
    <li Class="RainingBlood.WeatherDefExtension">
        <exposedThoughtCannibal>BloodCoveredCannibal</exposedThoughtCannibal>
    </li>
</modExtensions>

The way the XML maps to C♯ code is that every entry in the XML file is expected
to be a field in the *Def class (inherits from Verse.Def), for example for
the existing exposedThought the WeatherDef class has:

public ThoughtDef exposedThought;

If you were to just add exposedThoughtCannibal you’d get an error telling you
that exposedThoughtCannibal isn’t a field in the class:

<exposedThoughtCannibal>[...] doesn't correspond to any field in type WeatherDef

But RimWorld comes with the modExtensions field to extend Defs. In this case
we’re adding it to an entirely new Def, but you can also patch existing Defs
with XPath and PatchOperationAddModExtension.
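The loading convention described above – each XML element must name a field on the Def class, and anything else is an error – can be sketched as a toy loader. This is just an illustration of the idea, not the game’s actual loader, and all names here are hypothetical:

```python
import xml.etree.ElementTree as ET

class WeatherDef:
    # The only fields this toy loader knows about; an XML element with any
    # other name is an error – which is exactly what modExtensions works
    # around.
    def __init__(self):
        self.defName = None
        self.exposedThought = None

def load_def(xml_text, cls):
    """Map each child element onto the same-named field of the Def class."""
    obj = cls()
    for el in ET.fromstring(xml_text):
        if not hasattr(obj, el.tag):
            raise ValueError(f"<{el.tag}> doesn't correspond to any field "
                             f"in type {cls.__name__}")
        setattr(obj, el.tag, el.text)
    return obj
```

Feeding it `<WeatherDef><defName>RainingBlood</defName></WeatherDef>` fills in the field; feeding it an unknown element raises the same kind of error as above.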

You’ll also need to add a new class inheriting from Verse.DefModExtension:

namespace RainingBlood {
    public class WeatherDefExtension : Verse.DefModExtension {
        public RimWorld.ThoughtDef exposedThoughtCannibal;
    }
}

The Class attribute in the XML links the XML fields to this class. The name
can be anything. We can get the value in C♯ with the GetModExtension<T>()
method on any Def class, where T is the type (class name) you want. For
example, GetModExtension<WeatherDefExtension>() in this case. By using the
type system multiple mods can attach their own extensions and not conflict.
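The behaviour of HasModExtension/GetModExtension is easy to mimic; a hedged sketch of that type-keyed lookup, with Python stand-ins for the C♯ classes:

```python
class DefModExtension:
    pass

class Def:
    """Stand-in for a Def: extensions live in a list and are looked up by
    their class, so different mods' extensions never collide."""
    def __init__(self, mod_extensions=()):
        self.mod_extensions = list(mod_extensions)

    def has_mod_extension(self, cls):
        return any(isinstance(e, cls) for e in self.mod_extensions)

    def get_mod_extension(self, cls):
        for e in self.mod_extensions:
            if isinstance(e, cls):
                return e
        return None

class WeatherDefExtension(DefModExtension):
    def __init__(self, exposed_thought_cannibal=None):
        self.exposed_thought_cannibal = exposed_thought_cannibal
```

Because the extension is retrieved by its own class, another mod attaching its own DefModExtension subclass to the same Def simply never shows up in our lookup.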

Using Harmony

To make this actually do something we need to hook in some code; RimWorld
itself doesn’t really have a “mod system” for this, but we can use Harmony.
Harmony is a C♯ library to patch existing code and can do a number of things,
but the most useful (and least error-prone) is to run code before or after a
method. In our case, we want to run code after
Pawn_MindState.MindStateTick() to apply the exposedThoughtCannibal thought.

To use this we’ll need to register it as a dependency in our About/About.xml file:

<modDependencies>
    <li>
        <packageId>brrainz.harmony</packageId>
        <displayName>Harmony</displayName>
        <steamWorkshopUrl>steam://url/CommunityFilePage/2009463077</steamWorkshopUrl>
        <downloadUrl>https://github.com/pardeike/HarmonyRimWorld/releases/latest</downloadUrl>
    </li>
</modDependencies>

We’ll also have to add it to the RainingBlood.csproj file as a dependency
before the Assembly-CSharp dependency:

<!-- Dependencies -->
<ItemGroup>
    <!-- Harmony must be loaded first -->
    <Reference Include="0Harmony">
        <HintPath>../HarmonyRimWorld/Current/Assemblies/0Harmony.dll</HintPath>
        <Private>False</Private>
    </Reference> 

    <!-- The main game code (RimWorld and Verse) -->
    <Reference Include="Assembly-CSharp">
        <HintPath>../../RimWorldLinux_Data/Managed/Assembly-CSharp.dll</HintPath>
        <Private>False</Private>
    </Reference>

    [..]

Now we can use it to run some code after the Pawn_MindState.MindStateTick()
method:

namespace RainingBlood {
    public class WeatherDefExtension : Verse.DefModExtension {
        public RimWorld.ThoughtDef exposedThoughtCannibal;
    }

    [Verse.StaticConstructorOnStartup]
    public static class Patch {
        static Patch() {
            // Get the method we want to patch.
            var m = typeof(Verse.AI.Pawn_MindState).GetMethod("MindStateTick");

            // Get the method we want to run after the original.
            var post = typeof(RainingBlood.Patch).GetMethod("PostMindStateTick",
                       System.Reflection.BindingFlags.Static|System.Reflection.BindingFlags.Public);

            // Patch stuff! The string passed to the Harmony constructor can be
            // anything, and can be used to identify/remove patches if need be.
            new HarmonyLib.Harmony("arp242.rainingblood").Patch(m,
                postfix: new HarmonyLib.HarmonyMethod(post));
        }

        // The special __instance parameter has the original class instance
        // we're extending. This is based on the argument name.
        public static void PostMindStateTick(Verse.AI.Pawn_MindState __instance) {
            var pawn = __instance.pawn;

            // Same condition as MindStateTick, but inverted for an early return.
            if (Verse.Find.TickManager.TicksGame % 123 != 0 ||
                !pawn.Spawned || !pawn.RaceProps.IsFlesh || pawn.needs.mood == null)
                return;

            // Is this pawn a cannibal? If not, then there's nothing to do. You
            // can also expand this by checking for the Ideology cannibalism
            // memes, but this just checks the "cannibalism" trait on colonists.
            if (!pawn.story.traits.HasTrait(RimWorld.TraitDefOf.Cannibal))
                return;

            // Let's see if the current weather has our new exposedThoughtCannibal.
            var w = pawn.Map.weatherManager.CurWeatherLerped;
            if (!w.HasModExtension<WeatherDefExtension>())
                return;
            var t = w.GetModExtension<WeatherDefExtension>().exposedThoughtCannibal;
            if (t == null)
                return;

            // Remove any existing thought that was applied and apply our
            // cannibalistic thoughts.
            if (w.exposedThought != null)
                pawn.needs.mood.thoughts.memories.RemoveMemoriesOfDef(w.exposedThought);
            pawn.needs.mood.thoughts.memories.TryGainMemoryFast(t);
        }
    }
}

I use the “manual method” here as that’s a bit easier to debug if you did
something wrong, but you can also use the annotations; again, see the Harmony
documentation. One thing you need to watch out for is getting the
BindingFlags.[..] right: if you don’t, the reflection library won’t find
your method and will return null. See the GetMethod() documentation. This
part actually took me quite a while to get working. Unfortunately RimWorld
doesn’t have a REPL or console (AFAIK?) but you can do some printf-debugging
with Verse.Log.Message($"{var}") and the like.
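Conceptually a postfix patch is just “wrap the method: run the original, then my code, handing over the instance”. A rough Python monkey-patch equivalent, purely as an illustration (Harmony actually does this by rewriting the method’s IL, not by swapping attributes, and these names are made up):

```python
def patch_postfix(cls, name, postfix):
    """Replace cls.<name> with a wrapper that calls the original and then
    the postfix, passing the instance along (Harmony's __instance)."""
    original = getattr(cls, name)
    def wrapper(self, *args, **kwargs):
        result = original(self, *args, **kwargs)
        postfix(self)
        return result
    setattr(cls, name, wrapper)

class PawnMindState:
    """Toy stand-in for the game class we patched above."""
    def __init__(self):
        self.log = []
    def mind_state_tick(self):
        self.log.append("original")

def post_mind_state_tick(instance):
    instance.log.append("postfix")

patch_postfix(PawnMindState, "mind_state_tick", post_mind_state_tick)
```

After patching, calling mind_state_tick() on any instance runs the original body first and our extra code second – which is exactly the ordering PostMindStateTick relies on.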

I’m not going to step through the rest of the code in more detail here; I think
most of it should be obvious. I mostly just found this by looking through
various code and some strategic grepping. You can test it using the Debug
Actions menu, which allows assigning the Cannibalism trait to a colonist.

Next steps

The above wasn’t really all that useful as such, and there are many more parts
of RimWorld modding – most of which I haven’t looked at in detail yet – but this
should at least give a decent base to get started with.

I have to say that I found a lot of documentation and guides on the topic to be
of, ehm, less-than-stellar quality :-/ The RimWorld wiki has a whole bunch of
pages, but – with a few exceptions linked in this article – I found many to be
unclear, outdated, or both, and in a few cases just downright wrong. Keep that
in mind if something doesn’t work: usually it’s a mistake to assume the
documentation is wrong rather than you, but here it might actually be the case.
I’ll see if I’ll write some more if I keep up interest in this.

Some additional reading for topics not covered:

  • Multi-version mods

    Details how to make a mod compatible with both 1.2 and 1.3. I elided this for
    simplicity, and also because quite frankly I don’t really care as I’m just
    interested in writing some mods that work for me to fix/improve some things 🤷

  • RimWorld art source

    The original PSD files for all art in RimWorld. Useful if you want to use a
    modified version in your mod.

  • Mod folder structure

    Covers some things not used in this example, such as sounds, textures, and
    i18n.

  • TDBug adds some debug things which
    seem useful. Haven’t tried it yet.

Missing parts

And some things I’d still like to improve/figure out:

  • I would really really like a REPL, debugger, or some other way to speed up the
    dev cycle. RimWorld takes fairly long to start (almost a minute on my laptop)
    and toying around with things is kinda annoying and time-consuming.

    The closest I found is How I got RimWorld debugging to work; the CLI
    works on Linux (run with dnSpy.Console.exe, from the .NET download) but the
    GUI doesn’t (and never will, as the Windows-specific GUI toolkit things
    aren’t implemented on Linux). The CLI doesn’t support the debugger though,
    just decompilation.

    I tried the generic sdb Mono debugger, but the game doesn’t load directly
    with Mono but rather via the 32M UnityPlayer.so, so using that seems
    difficult. Using gdb works, but actually doing useful stuff with it (i.e.
    breakpoints, calling functions, displaying variable values) seems harder;
    I haven’t spent that much time with it yet.

    Making the game start faster would help too, or an automatic “script” to run
    on startup (i.e. to apply certain debug actions).

  • On Linux the Assemblies are in RimWorldLinux_Data/, but on Windows and macOS
    this directory is RimWorldWin64_Data/ and RimWorldMac_Data/. Right now the
    build solution builds just on Linux, but I’d like to be able to make it build
    on all systems.

    Hard-coding this path seems common; to get other mods to build I had to
    manually s/Win64/Linux/ some things, which is not ideal. I couldn’t figure out
    how to make it cross-platform.

  • There are probably some other C♯/.NET things that could be improved. I’m
    really a n00b at this.

Footnotes

  1. Although I later did confirm that they’re relative weights, see
    Verse.TryRandomElementByWeight(), which can be examined after
    decompiling the source, which we’ll cover later. 

  2. Explicitly allowed in the EULA it
    turns out: “You’re allowed to ‘decompile’ our game assets and look
    through our code, art, sound, and other resources for learning
    purposes, or to use our resources as a basis or reference for a Mod.
    However, you’re not allowed to rip these resources out and pass them
    around independently.”
    I wish they’d just make this easier by
    distributing more code, but ah well. 

How to end up with 500,000 commits in your log

Post Syndicated from arp242.net original https://www.arp242.net/500k-commits.html

I posted this yesterday:

I once worked for a company where they managed to create about half a million
subversion commits in just 2 or 3 years, with about 3 developers working on
it. I’ll leave it as an exercise to guess how they managed to do that 🙂

No one guessed, which is not a surprise as it’s by far the weirdest usage of a
VCS that I’ve ever seen.

They had a server in the office which ran the SVN server and OpenVZ to give
every developer their own container running Apache and PHP, and that’s what you
would use for development. How do you get your code to that container? NFS? SMB?
FTP? Nah, that’s so boring! SVN is a much better tool for this!

The way this worked is that on every push the SVN server would run this PHP
script to copy the changes to the right container based on the committer, the
idea being that everyone only got their own changes and not other people’s.
You didn’t work off your own branch – branches are for losers – you would
always commit to trunk, which was the only branch anyone used. The script
would look at the committer and copy all the files that commit touched to that
person’s container. Every once in a while you manually updated your directory
to get other people’s changes. Two people working on the same file at the same
time was … unwise.

That PHP script was indecipherable, with umpteen levels of nesting. No one
dared to touch it because it “mostly worked”, some of the time anyway. If it
was the third Tuesday of the month.

Every little change you wanted to see you had to commit. Add a debug print?
Commit. Improve that print? Commit. Found the bug and fixed it? Commit. Remove
that print again? Commit. Fix up the comment? Commit. People had their editors
set up to commit and push to SVN on save. You could easily rack up hundreds of
commits on a single day.

I don’t recall exactly how many people worked on this and for how long; I think
it was about 3-4 developers over a timespan of 2-3 years before I joined, maybe
even shorter. It was a pretty small company. I do distinctly remember reaching
the half-a-million mark.

It really was a subversion of subversion.

I worked like this for a few days before I told them to give me access to the
server so I could set up SMB because this was just unworkable for me. Aside from
mucking up your SVN log, you had to run a command every time, which is just
annoying (I would stick that in a Vim autocmd now, but I didn’t know about those
back then, and since the machine they gave me was Windows I didn’t really know
how to do file watching either). The reason it took me that long was that this
was my first real programming job and I was a bit too insecure to ask sooner
😅 It also made me doubt myself: “Am I not understanding SVN correctly? Is this
normal? Do all companies work like this?” Turns out I did understand it
correctly, that it’s not, and that no one does.

I migrated the entire shebang to Vagrant and Mercurial about a year later. I
didn’t bother retaining the Subversion history. rm -rf .svn; hg init; hg ci -m
'import svn code'; hg push and I called it a day.


There were some other weird things there as well. Almost all of the company
consisted of very junior people, often with no experience outside of that
company. I had the impression they hired this way because it was “cheaper”.
They also used interns as “ohh, free labour!”

I guess the lesson here is: make sure you have at least one vaguely senior
developer. Aside from me there was one other guy who certainly wasn’t bad, but
also lacked experience outside of that company. I wasn’t really a senior
either,[1] but was older than most and had already been programming for quite
some time (just not for a living; I did a lot of other jobs before I really made
a career out of programming).

Another lesson is to invest at least a little bit of time in the tooling you
use. I actually really hate learning about “plumbing” like VCS systems because
I’d rather be doing more useful stuff, but even just a little bit of effort goes
a long way and can save you a lot of time. Branches? They had just never heard
of them. Their entire setup would still be weird with branches, but it would be
less weird.

And if stuff is awkward then … maybe you’re doing it wrong? No one liked how
any of this worked, but just accepted it as a fact of life, like how you would
accept that it really sucks that it rains today. That’s an attitude I’ve seen
more often and never really understood: if I see something that’s really
awkward, frustrating, and time-consuming then I want to fix it, but a lot of
people seem happy to just 🤷 and accept it.


Some other tidbits about this job:

  • One of their websites was a “website builder”, like Geocities, except worse.
    The value for the customers was that it was in Dutch, with Dutch support
    (it didn’t even support English).[2] Shortly before I joined they decided to
    rewrite it from scratch (which in this case was probably the right decision,
    by the way).

    The old version had basically all code in the controllers; it was a hairy and
    messy “big ball of mud”; you’ve probably seen this before, especially if you
    were doing PHP ten years ago.

    So come the rewrite people were like “we gonna fix that!” Sure thing; so
    what they did was create a “callModel” handler in the controller, and you
    had URLs like https://example.com/?callModel=foo&fun=bar which would just
    call the cm_bar() method on the foo model.

    They had very clean controllers now. Hardly any code there! Look how elegant!
    Almost all code was in the models, including a lot of HTTP handling stuff.

    At least it was prefixed so you couldn’t call random functions from the
    models: only those with the cm_ prefix worked, but it was just the same
    pattern (if you can call it that) with s/controllers/models/.

  • The same guy who did all of that kept talking about “public variables” in
    JavaScript. I didn’t really understand what he meant by that, since
    JavaScript doesn’t really have public/private visibility or even classes;
    but I didn’t work on that project and wasn’t super-familiar with JavaScript
    at the time, so whatever.

    Later I started working on that project and learned that a “public variable”
    was window.varname. This was basically how he solved all scoping problems.

    This is something I’ve seen more often with people “raised on OOP”: drop
    them in a non-OOP procedural environment and they’re completely lost on how
    to organize their code. He was actually pretty smart, but also very young,
    inexperienced, and not especially well versed in various fundamentals. I
    expect that now, ten years later, he’s probably much better. Just being
    smart is not enough.

  • One of the developers could not code at all. I don’t mean this as “he was a
    bad coder”, I mean he literally didn’t know how to write code. He would
    spend an entire week on something basic and the end result would be a
    20-line function that didn’t work, would never work, and that I could write
    in half an hour, if not less.

    He was let go after his contract expired and got re-hired at his previous
    Big Enterprise™® company with a raise. This is one reason I’ve always
    eschewed those kinds of organisations and mostly worked for smaller
    companies.

    Nice guy though; he was a lot of fun. Just a bad choice of careers (or maybe
    not, since his salary was a lot higher than mine…)

  • One of the websites we developed was a reseller for second-hand concert
    tickets; it combined data from TicketMaster, ViaGoGo, and a whole bunch of
    others so you could “compare prices”, and we got a commission for every
    sale.

    I learned far too late that this entire industry is little more than a scam,
    and that the people running this show have the ethical capacity of a
    psychopathic starfish. We were no different.

    That entire industry isn’t built on selling something useful to anyone but
    just on duping people in to buying tickets at overinflated prices. A lot of
    times the concerts weren’t sold out at all, but we pretended they were. It
    was just a lie. Pretty much 100% of our traffic came in through AdWords:
    people
    would search “Something-something concerts tickets”, got an ad from us, and be
    duped in to buying them at ridiculous prices.

    At one point I learned that one company we connected with would literally just
    invent concerts: they would guess “they’re probably doing a tour next year”
    and start “pre-selling” tickets for it at extraordinary prices.

    I very much regret working on this 😑 First job, insecure about your career
    prospects and whether you’re able to get a new job, and it becomes easy to
    rationalize these things to yourself.

  • While most developers felt at least a little bit dirty working on this, the
    owner thought it was all just great. He was an asshole in general, not just in
    his business practices. He was the kind of person that would aggressively
    berate servers in restaurants over minor things, and ended up being the reason
    I quit.

    That was not a very pretty affair: I hit my screen so hard it almost fell off
    my desk 😅 The backstory is that he was nowhere to be found while we (the
    two remaining devs) worked hard on a new product for an entire month, and
    when big bossman finally showed up it was all bad and had to be done
    differently. I tried to explain the reasons why it was designed the way it
    was, and his only response was “I’m the boss, just do what I say”. A real
    investor in people, that man. The only other remaining developer quit not
    much later.

    The product failed miserably. From what I heard later it never got a single
    customer. A shame, the calendar UI in particular was very good IMO, and better
    than literally any web-based calendar I could find (but I’m probably biased
    😅). Turns out developing a real product is harder than scamming people.

    He was also suffering from the delusion that he could program. He could not
    program. He would send us cobbled-together stuff anyway, which was
    invariably just hopeless, and when we gently tried to improve it he became
    very defensive to the point of aggression and accused us of being stubborn and
    close-minded.

    And then there was the time I improved some of the stuff he translated to
    English. It’s not like my English is top-notch perfect, but what he wrote was
    such obvious Dunglish. I just improved it a bit before putting it online,
    yet he still took it as a serious personal attack when he found out, which
    turned into this huge thing for no real reason 🤷

    The only saving grace was that he worked remotely from Berlin so we didn’t
    have to deal with him in the office daily. We had a nickname for him in the
    office, after a certain failed Austrian painter who also worked out of Berlin.

Footnotes

  1. And arguably, I’m still not; this entire idea where everyone with a few
    years of experience is seen as a “senior” is pretty silly IMHO. It reminds
    me of a certain industry where everyone under 30 is a “teen” and everyone
    over 30 a “MILF”. 

  2. People often seem dismissive of the value of i18n: “everyone speaks
    English!” Well, that depends on who your customer base is. Even in the
    Netherlands where almost everyone speaks some English there are still
    loads of people who are not especially good at it and much more
    comfortable with Dutch. This is probably over half the population, and
    it’s really not just old people. 

Stallman isn’t great, but not the devil

Post Syndicated from arp242.net original https://www.arp242.net/rms.html

So Richard Stallman is back at the FSF, on the board of directors this time
rather than as President. I’m not sure how significant this position is in the
day-to-day operations, but I’m not sure if that’s really important.

How anyone could have thought this was a good idea is beyond me. I’ve long
considered Stallman to be a poor representative of the community, and quite
frankly it baffles me that so many people consider him a good one. I’m not sure
what the politics were that
lead up to this decision; I had hoped that after Stallman’s departure the FSF
would move forward and shed some of the Stallmanisms. It seems this hasn’t
happened.

To quickly recap why Stallman is a poor representative:

  • Actively turned many people off because he’s such a twat; one of the better
    examples I know of is from Keith Packard, explaining why X didn’t use the
    GPL in spite of Packard already having used it for some of his projects
    before:

    Richard Stallman, the author of the GPL and quite an interesting individual
    lived at 5405 DEC square, he lived up on the sixth floor I think? Had an
    office up there; he did not have an apartment. And we knew him extremely
    well. He was a challenging individual to get along with. He would regularly
    come down to our offices and ask us, or kind of rail at us, for not using
    the GPL.

    This did not make a positive impression on me; this was my first
    interactions with Richard directly and I remember thinking at the time,
    “this guy is a little, you know, I’m not interesting in talking to him
    because he’s so challenging to work with.”

    And so, we should have listened to him then but we did not because, we know
    him too well, I guess, and met him as well.

    He really was right, we need to remember that!

  • His behaviour against women in particular is creepy. This is not a crime (he
    has, as far as I know, never forced himself on anyone) but not a good quality
    in a community spokesperson, to put it mildly.

  • His personal behaviour in general is … odd, to put it mildly. Now, you can
    be as odd as you’d like as far as I’m concerned, but I also don’t think
    someone like that is a good choice to represent an entire community.

  • Caused a major and entirely avoidable fracture of the community with the Open
    Source
    movement; it’s pretty clear that Stallman specifically, as a person,
    person, was a major reason for the OSI people to start their own organisation.
    Stallman still seems to harbour sour grapes over this more than 20 years
    later.

  • Sidetracking of pointless issues (“GNU/Linux”, “you should not be using hacker
    but cracker”, “Open Source misses the point”, etc.), as well as stubbornly
    insisting on the term “Free Software” which is confusing and stands in the way
    of communicating the ideals to the wider world. Everyone will think that an
    article with “Free Software” in the title will be about software free of
    charge. There is a general lack of priorities or pragmatism in almost
    anything Stallman does.

  • Stallman’s views in general on computing are stuck somewhere about 1990.
    Possibly earlier. The “GNU Operating System” (which does not exist, has never
    existed, and most likely will never exist[1]) is not how to advance Free
    Software in modern times. Most people don’t give a rat’s arse which OS they’re
    using to access GitHub, Gmail, Slack, Spotify, Netflix, AirBnB, etc. The world
    has changed and the strategy needs to change – but Stallman is still stuck in
    1990.

  • Insisting on absolute freedom to the detriment of more freedom compared to
    the status quo. No, people don’t want to run a “completely free GNU/Linux
    operating system” if their Bluetooth and webcam doesn’t work and if they can’t
    watch Netflix. That’s just how it is. Deal with it.

    His views are quite frankly ridiculous:

    If [an install fest] upholds the ideals of freedom, by installing only free
    software from 100%-free distros, partly-secret machines won’t become
    entirely functional and the users that bring them will go away disappointed.
    However, if the install fest installs nonfree distros and nonfree software
    which make machines entirely function, it will fail to teach users to say no
    for freedom’s sake. They may learn to like GNU/Linux, but they won’t learn
    what the free software movement stands for.

    [..]

    My new idea is that the install fest could allow the devil to hang around,
    off in a corner of the hall, or the next room. (Actually, a human being
    wearing sign saying “The Devil,” and maybe a toy mask or horns.) The devil
    would offer to install nonfree drivers in the user’s machine to make more
    parts of the computer function, explaining to the user that the cost of this
    is using a nonfree (unjust) program.

    Aside from the huge cringe factor of having someone dressed up as a devil to
    install a driver, the entire premise is profoundly wrong; people can
    appreciate freedom while also not having absolute/maximum freedom. Almost the
    entire community does this, with only a handful of purist exceptions. This
    will accomplish nothing except turn people off.

  • Crippling software out of paranoia; for example Stallman refused to make gcc
    print the AST
    – useful for Emacs completion and other tooling –
    because he was afraid someone might “abuse” it. He comes off as a gigantic
    twat in that entire thread (e.g. this).

    How do you get people to use Free Software? By making great software people
    want to use. Not by offering some shitty crippled product where you can’t do
    some common things you can already do in the proprietary alternatives.


Luckily, the backlash against this has been significant, including An open
letter to remove Richard M. Stallman from all leadership positions. Good.
There are many things in the letter I can agree with. If there are parliamentary
hearings surrounding some Free Software law then would you want Stallman to
represent you? Would you want Stallman to be left alone in a room with some
female lawmaker (especially an attractive one)? I sure wouldn’t; I’d be fearful
he’d leave a poor impression, or outright disgrace the entire community.

But there are also a few things that bother me, as there are in the general
conversation surrounding this topic. Quoting a few things from that letter:

[Stallman] has been a dangerous force in the free software community for a
long time. He has shown himself to be misogynist, ableist, and transphobic,
among other serious accusations of impropriety.

[..]

him and his hurtful and dangerous ideology

[..]

RMS and his brand of intolerance

Yikes! That sounds horrible. But a closer examination doesn’t really bear out
these strong claims.

The transphobic claim seems to hinge entirely on his eclectic opinion regarding
gender-neutral pronouns; he prefers some peculiar set of neologisms (“per”
and “pers”) instead of the singular “they”. You can think about his pronoun
suggestion what you will – I feel it’s rather silly and pointless at best – but
a disagreement on how to best change the common use of language to be more
inclusive does not strike me as transphobic. Indeed, it strikes me as the exact
opposite: he’s willing to spend time and effort to make language more
inclusive. That he doesn’t do it in the generally accepted way is not
transphobia, a “harmful ideology”, or “dangerous”. It’s really not.

Stallman is well known for his excessive pedantry surrounding language;
he’s not singularly focused on the issue of pronouns and has consistently
posted in favour of trans rights.

Stallman’s penchant to make people feel uncomfortable has long been known, and
should hardly come as a surprise to anyone. Many who met him in person did not
leave with an especially good impression of him for one reason or the other. His
behaviour towards women in particular is pretty bad; many anecdotes have been
published and they’re pretty 😬

But … I don’t have the impression that Stallman dislikes or distrusts women,
or sees them as subservient to men. Basically, he’s just creepy. That’s not
good, but is it misogyny? His lack of social skills seem to be broad and not
uniquely directed towards women. He’s just a socially awkward guy in general. I
mean, this is a guy who will, when giving a presentation, take off his
shoes and socks – which is already a rather weird thing to do – then
proceed to rub his bare foot – even weirder – only to then appear to
eat something from his foot – wtf wtf wtf?!

If he can’t understand that this is just … wtf, then how can you expect him to
understand that some comment towards a woman is wtf?

Does all of this excuse bad behaviour? No. But it shines a rather different
light on things than phrases such as “misogynist”, “hurtful and dangerous
ideology”, and “his brand of intolerance” do. He hasn’t forced himself on
anyone, as far as I know, and most complaints are about him being creepy.

I don’t think it’s especially controversial to claim that Stallman would have
been diagnosed with some form of autism if he had been born several decades
later. This is not intended as an insult or some such, just to establish him as
a neurodivergent[2] individual. Someone like that is absolutely a poor choice
for a leadership position, but at the same time doesn’t diversity also mean
diversity of neurodivergent people, or at the very least some empathy and
understanding when people exhibit a lack of social skills and behaviour
considered creepy?

At what point is there a limit if someone’s neurodiversity drives people away? I
don’t know; there isn’t an easy answer to this. Stallman is clearly unsuitable
for a leadership role; but “misogynist”? I’m not really seeing it in Stallman.

The ableist claim seems to mostly boil down to a comment he posted on his
website regarding abortion of fetuses with Down’s syndrome:

A new noninvasive test for Down’s syndrome will eliminate the small risk of
the current test.

This might lead more women to get tested, and abort fetuses that have Down’s
syndrome. Let’s hope so!

If you’d like to love and care for a pet that doesn’t have normal human mental
capacity, don’t create a handicapped human being to be your pet. Get a dog or
a parrot. It will appreciate your love, and it will never feel bad for being
less capable than normal humans.

It was later edited to its current version:

A noninvasive test for Down’s syndrome eliminates the small risk of the old
test. This might lead more women to get tested, and abort fetuses that have
Down’s syndrome.

According to Wikipedia, Down’s syndrome is a combination of many kinds of
medical misfortune. Thus, when carrying a fetus that is likely to have Down’s
syndrome, I think the right course of action for the woman is to terminate the
pregnancy.

That choice does right by the potential children that would otherwise likely
be born with grave medical problems and disabilities. As humans, they are
entitled to the capacity that is normal for human beings. I don’t advocate
making rules about the matter, but I think that doing right by your children
includes not intentionally starting them out with less than that.

When children with Down’s syndrome are born, that’s a different situation.
They are human beings and I think they deserve the best possible care.

He also made a few other comments to the effect of “you should abort if you’re
pregnant with a fetus who has Down’s syndrome”.

That last paragraph of the original version was … not great, but the new
version seems okay to me. It is a woman’s right to choose to have an abortion,
for any reason, including not wanting to raise a child with Down’s syndrome.
This is already commonplace in practice, with many women choosing to do so.

Labelling an entire person as ableist based only on this – and this is really
the only citation of ableism I’ve been able to find – seems like a stretch, at
best. It was a shitty comment, but he did correct it which is saying a lot in
Stallman terms, as I haven’t seen him do that very often.


Phrases like “a dangerous force”, “dangerous ideology”, and “brand of
intolerance” make it sound like he’s crusading on these kind of issues. Most of
these are just short notes on his personal site which few people seem to read.

Most of the issues surrounding Stallman seem to be about him thinking out loud,
not realizing when it is or is not appropriate to do so, being excessively
pedantic over minor details, or just severely lacking in social skills. This can
be inappropriate, offensive, or creepy – depending on the scenario – but that’s
just something different than being actively transphobic or dangerous. If
someone had read only this letter without any prior knowledge of Stallman they
would be left with the impression that Stallman is some sort of alt-right troll
writing for Breitbart or the like. This is hardly the case.

I think Stallman should resign from his newly appointed post, and from GNU as
well,
over his personal behaviour in particular. Stallman isn’t some random programmer
working on GNU jizamabob making the occasional awkward comment, he’s the face of
the entire movement. Appointing “a challenging individual to get along with” –
to quote Packard – is not the right choice. I feel the rest of the
FSF board has shown spectacularly poor judgement in allowing Stallman to come
back.[3]

But I can not, in good conscience, sign the letter as phrased currently. It
vastly exaggerates things to such a degree that I feel it does a gross injustice
to Stallman. It’s grasping at straws to portray Stallman as the most horrible
human being possible, and I don’t think he is that. He seems clueless on some
topics and social interactions, and I find him a bit of a twat in general, but
that doesn’t make you a horrible and dangerous person. I find the letter lacking
in empathy and deeply unkind.


In short, I feel Stallman’s aptitudes are not well suited to any sort of
leadership position and I would rather not have him represent the community I’m
a part of, even if he did start it and made many valuable contributions to it.
Just starting something does not give you perpetual ownership over it, and in
spite of all his hard work I feel he’s been very detrimental to the movement and
has been a net-negative contributor for a while. A wiser version of Stallman
would have realized his shortcomings and stepped down some time in the late 80s
to let someone else be the public face.

Overall I feel he’s not exactly a shining example of the human species, but then
again I’m probably not either. He is not the devil and the horrible person that
the letter makes him out to be. None of these exaggerations are even needed to
make the case that he should be removed, which makes it even worse.

It’s a shame, because instead of moving forward with Free Software we’re
debating this. Arguably I should just let this go as Stallman isn’t really
worth defending IMO, but on the other hand being unfair is being unfair, no
matter who the target may be.

Footnotes

  1. A set of commandline utilities, libc, and a compiler are not an
    operating system. Linux (the kernel) is not the “last missing piece of
    the GNU operating system”. 

  2. Neurodivergency is, in a nutshell, the idea that “normal” is a wide
    range, and that not everyone who doesn’t fit with the majority should be
    labelled as “there is something wrong with them”, such as autism. While
    some people take this a bit too far (not every autist is
    high-functioning; for some it really is debilitating) I think there’s
    something to this. 

  3. I guess this shouldn’t come as that much of a surprise, as the only people
    willing and able to hang around Stallman’s FSF were probably similar-ish
    people. It’s probably time to just give up on the FSF and move forward
    with some new initiative (OSI is crap too, for different reasons). I swear
    we’ve got to be the most dysfunctional community ever. 

Go is not an easy language

Post Syndicated from arp242.net original https://www.arp242.net/go-easy.html

Go is not an easy programming language. It is simple in many ways: the syntax
is simple, most of the semantics are simple. But a language is more than just
syntax; it’s about doing useful stuff. And doing useful stuff is not always
easy in Go.

Turns out that combining all those simple features in a way to do something
useful can be tricky. How do you remove an item from an array in Ruby?
list.delete_at(i). And remove entries by value? list.delete(value). Pretty
easy, yeah?

In Go it’s … less easy; to remove the index i you need to do:

list = append(list[:i], list[i+1:]...)

And to remove the value v you’ll need to use a loop:

n := 0
for _, l := range list {
    if l != v {
        list[n] = l
        n++
    }
}
list = list[:n]

Is this unacceptably hard? Not really; I think most programmers can figure out
what the above does even without prior Go experience. But it’s not exactly
easy either. I’m usually lazy and copy these kind of things from the Slice
Tricks
page because I want to focus on actually solving the problem at
hand, rather than plumbing like this.

It’s also easy to get it (subtly) wrong or suboptimal, especially for less
experienced programmers. For example compare the above to copying to a new array
and copying to a new pre-allocated array (make([]string, 0, len(list))):

InPlace             116 ns/op      0 B/op   0 allocs/op
NewArrayPreAlloc    525 ns/op    896 B/op   1 allocs/op
NewArray           1529 ns/op   2040 B/op   8 allocs/op

While 1529ns is still plenty fast enough for many use cases and isn’t something
to excessively worry about, there are plenty of cases where these things do
matter and having the guarantee to always use the best possible algorithm with
list.delete(value) has some value.


Goroutines are another good example. “Look how easy it is to start a goroutine!
Just add go and you’re done!” Well, yes; you’re done until you have five
million of those running at the same time and then you’re left wondering where
all your memory went, and it’s not hard to “leak” goroutines by accident either.

There are a number of patterns to limit the number of goroutines, and none of
them are exactly easy. A simple example might be something like:

var (
	jobs    = 20                 // Run 20 jobs in total.
	running = make(chan bool, 3) // Limit concurrent jobs to 3.
	done    = make(chan bool)    // Signal that all jobs are done.
)

for i := 1; i <= jobs; i++ {
	running <- true // Fill running; this will block and wait if it's already full.

	// Start a job.
	go func(i int) {
		defer func() {
			<-running      // Drain running so new jobs can be added.
			if i == jobs { // Last job, signal that we're done.
				done <- true
			}
		}()

		// "do work"
		time.Sleep(1 * time.Second)
		fmt.Println(i)
	}(i)
}

<-done // Wait until all jobs are done.
fmt.Println("done")

There’s a reason I annotated this with some comments: for people not intimately
familiar with Go this may take some effort to understand. This also won’t ensure
that the numbers are printed in order (which may or may not be a requirement).

Go’s concurrency primitives may be simple and easy to use, but combining them to
solve common real-world scenarios is a lot less simple. The original version of
the above example was actually incorrect.
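For comparison, a common alternative pattern uses sync.WaitGroup together with a buffered channel acting as a semaphore, which avoids the separate done-channel bookkeeping; this is a sketch with a made-up runJobs helper, not the author's code:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// runJobs runs n jobs with at most limit running concurrently.
// A buffered channel works as a semaphore and a WaitGroup waits
// for every job to finish. It returns the number of completed jobs.
func runJobs(n, limit int) int {
	var (
		done int64
		wg   sync.WaitGroup
		sem  = make(chan struct{}, limit)
	)
	for i := 1; i <= n; i++ {
		wg.Add(1)
		sem <- struct{}{} // Acquire a slot; blocks while limit jobs are running.
		go func() {
			defer wg.Done()
			defer func() { <-sem }() // Release the slot.

			// "do work"
			atomic.AddInt64(&done, 1)
		}()
	}
	wg.Wait() // Wait for all jobs, not just the most recently started one.
	return int(done)
}

func main() {
	fmt.Println(runJobs(20, 3)) // 20
}
```

Still not something you would call trivially easy for a newcomer, which is rather the point.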


In Simple Made Easy Rich Hickey argues that we shouldn’t confuse “simple”
with “it’s easy to write”: just because you can do something useful in one or
two lines doesn’t mean the underlying concepts – and therefore the entire
program – are “simple” as in “simple to understand”.

I feel there is some wisdom in this; in most cases we shouldn’t sacrifice
“simple” for “easy”, but that doesn’t mean we can’t think at all about how to
make things easier. Just because concepts are simple doesn’t mean they’re easy
to use, can’t be misused, or can’t be used in ways that lead to (subtle) bugs.
Pushing Hickey’s argument to the extreme we’d end up with something like
Brainfuck and that would of course be silly.

Ideally a language should reduce the cognitive load required to reason about its
behaviour; there are many ways to increase this cognitive load: complex
intertwined language features are one of them, and getting “distracted” by
implementing fairly basic things from those simple concepts is another: it’s
another block of code I need to reason about. While I’m not overly concerned
about code formatting or syntax choices, I do think it can matter to reduce this
cognitive load when reading code.

The lack of generics probably plays some part here; implementing a slices
package which does these kind of things in a generic way is hard right now.
Generics makes this possible and also makes things more complex (more language
features are used), but they also make things easier and, arguably, less complex
on other fronts.[1]


Are these insurmountable problems? No. I still use (and like) Go after all. But
I also don’t think that Go is a language that you “could pick up in ~5-10
minutes”, which was the comment that prompted this post; a sentiment I’ve seen
expressed many times.

As a corollary to all of the above; learning the language isn’t just about
learning the syntax to write your ifs and fors; it’s about learning a way of
thinking. I’ve seen many people coming from Python or C♯ try to shoehorn
concepts or patterns from those languages in Go. Common ones include using
struct embedding as inheritance, panics as exceptions, “pseudo-dynamic
programming” with interface{}, and so forth. It rarely ends well, if ever.

I did this as well when I was writing my first Go program; it’s only natural.
And when I started as a Ruby programmer I tried to write Python code in Ruby
(although this works a bit better as the languages are more similar, but there
are still plenty of odd things you can do such as using for loops).

This is why I don’t like it when people get redirected to the Tour of Go to
“learn the language”, as it just teaches basic syntax and little more. It’s nice
as a little, well, tour to get a bit of a feel of the language and see how it
roughly works and what it can roughly do, but it’s ill-suited to actually learn
the language.

Footnotes

  1. Contrary to popular belief the Go team was never “against” generics;
    I’ve seen many comments to the effect of “the Go team doesn’t think
    generics are useful”, but this was never the case. 

Downsides of working remotely

Post Syndicated from arp242.net original https://www.arp242.net/remote.html

I love remote work, and I’ve been working remotely for the last five years, but
I think there are some serious downsides too. In spite of what all the “remote
work
is the future!” articles of the last year claim, it’s not all perfect, or
suitable for everyone.


Working remotely means less socializing with your coworkers: fewer chats over
the coffee machine, you don’t go out to the pub for a pint, that sort of thing.
I think you shouldn’t underestimate the value of these kind of things: it’s just
a lot easier to work with people you get along with well socially.

Conflicts will inevitably happen; it’s just part of human nature. But the better
we get along socially the higher the chance is that we can resolve things
amicably without thinking (though not saying) “they’re just a bloody idiot!”, or
being left with a feeling of lingering resentment. The more you’ve socialized
with someone, the more “social goodwill” you’ve got to fall back on in times of
conflict.

It’s hard to communicate over text well. Even if we ignore international
cultural differences, the lack of body language and direct feedback makes it
easy for misunderstandings and miscommunication to happen. We all phrase things
awkwardly or too harshly on occasion, and we can all lose our temper a bit (some
more frequently than others). This is normal, but when you write it in text
it’s there; there is no opportunity to immediately correct it based on your
conversation partner’s feedback; there is no body language to clarify the
meaning, there is no opportunity to immediately add nuance. The recipient will
just be left steaming over your shitty remark. It’s much easier for things to
escalate.

Video conferencing is a bit better, but in my opinion still a poor substitute
for an actual conversation, and being in video chats all day also isn’t really
an option. In practice, a lot of communication will happen over text (chats,
emails, issues, etc.).


Especially for more junior people the lack of feedback and guidance can be
very detrimental. You can’t just pop in and ask how they are doing, or sit next
to them and guide them through some things. When I guided an intern years ago I
would regularly just turn around (he was sitting behind me) and look on his
screen to see what he was doing and how he was doing it, and try to offer some
helpful guidance if needed. I felt it was very helpful for him (at least, I hope
so, although last I heard he didn’t pursue his career in IT and is the manager
of a McDonald’s now). This sort of thing is much harder to do well remotely.

The lack of guidance for more junior programmers is already a big problem in the
IT industry specifically because for decades there have been many more junior
people than senior people, and remote work makes this worse. Generally I
wouldn’t recommend remote work for entry-level and junior jobs.


Another big downside is that remote work can get lonely; most of us don’t hang
out with friends every single day, and it’s not uncommon to not have a serious
conversation with anyone for days on end, especially if you live alone.
Depending on where you live and your personality, you can restructure your
social life a bit to compensate for this, which is what I ended up doing, but
it’s something that takes some amount of effort and isn’t necessarily for
everyone.

For this reason alone, I feel that remote work on a grand scale might not really
be a good thing. Western society in general seems to have a bit of an issue with
social contact – how many of us talk to our neighbours?

This isn’t something that can be ascribed to a single cause, or something I have
an answer to – but we already tore down the church and village communities for
many, and I’m not sure if it’s a good idea to tear down the “office community”
too. Not that I think these communities were necessarily the best (I’m not
religious), but tearing it all down without … something … as a replacement
may not be wise.


Don’t get me wrong, I love working remote: I have fewer headaches (why are so
many offices so badly ventilated?!), don’t have to mentally block out noise all
day long (which I find very draining), and can make my own schedule. In general,
I’m much more productive and happier.

But in the torrent of enthusiasm for remote work in the COVID era, it’s probably
good to focus on some downsides, too. Personally, I’m skeptical that it’s “the
future of work” and will introduce a paradigm shift. And if it does, we should
be acutely aware of some of the downsides.

Other people have listed some other downsides, such as a lack of structure or
work/life balance. These have not been any problems for me personally (I very
much like the lack of structure, and feel the “eight hours bum-on-seat” model
is equally problematic). Some other perspectives:

Bitmasks for nicer APIs

Post Syndicated from arp242.net original https://www.arp242.net/bitmask.html

Bitmasks are one of those things where the basic idea is simple to understand:
it’s just 0s and 1s being toggled on and off. But actually “having it click”
to the point where it’s easy to work with can be a bit trickier. At least, it is
(or rather, was) for me 😅

With a bitmask you hide (or “mask”) certain bits of a number, which can be
useful for various things as we’ll see later on. There are two reasons one might
use bitmasks: for efficiency or for nicer APIs. Efficiency is rarely an issue
except for some embedded or specialized use cases, but everyone likes nice APIs,
so this is about that.
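The basic operations are: set a bit with |, clear it with &^ (Go’s AND NOT operator), and test it with &. A minimal sketch with made-up flag values:

```go
package main

import "fmt"

// Three flags, each occupying its own bit.
const (
	Bold   uint64 = 1 << 0 // 0b001
	Faint  uint64 = 1 << 1 // 0b010
	Italic uint64 = 1 << 2 // 0b100
)

func main() {
	var attrs uint64

	attrs |= Bold | Italic // Set: OR the bits in.
	attrs &^= Italic       // Clear: AND NOT masks the bit back out.

	fmt.Println(attrs&Bold != 0)   // true: Bold is still set.
	fmt.Println(attrs&Italic != 0) // false: Italic was cleared.
}
```

Everything that follows is built on these three operations.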


A while ago I added colouring support to my little zli library. Adding
colours to your terminal is not very hard as such, just print an escape code:

fmt.Println("\x1b[31mRed text!\x1b[0m")

But a library makes this a bit easier. There’s already a bunch of libraries out
there for Go specifically, the most popular being Fatih Arslan’s color:

color.New(color.FgRed).Add(color.Bold).Add(color.BgCyan).Println("bold red")

This is stored as:

type (
    Attribute int
    Color     struct { params  []Attribute }
)

I wanted a simple way to add some colouring, which looks a bit nicer than the
method chain in the color library, and eventually figured out you don’t need a
[]int to store all the different attributes but that a single uint64 will do
as well:

zli.Colorf("bold red", zli.Red | zli.Bold | zli.Cyan.Bg())

// Or alternatively, use Color.String():
fmt.Printf("%sbold red%s\n", zli.Red|zli.Bold|zli.Cyan.Bg(), zli.Reset)

Which in my eyes looks a bit nicer than Fatih’s library, and also makes it
easier to add 256 and true colour support.

All of the below can be used in any language by the way, and little of this is
specific to Go. You will need Go 1.13 or newer for the binary literals to work.


Here’s how zli stores all of this in a uint64:

                                   fg true, 256, 16 color mode ─┬──┐
                                bg true, 256, 16 color mode ─┬─┐│  │
                                                             │ ││  │┌── parsing error
 ┌───── bg color ────────────┐ ┌───── fg color ────────────┐ │ ││  ││┌─ term attr
 v                           v v                           v v vv  vvv         v
 0000_0000 0000_0000 0000_0000 0000_0000 0000_0000 0000_0000 0000_0000 0000_0000
 ^         ^         ^         ^         ^         ^         ^         ^
64        56        48        40        32        24        16         8

I’ll go over it in detail later, but in short (from right to left):

  • The first 9 bits are flags for the basic terminal attributes such as bold,
    italic, etc.

  • The next bit is to signal a parsing error for true colour codes (e.g. #123123).

  • There are 3 flags for the foreground and background colour each to signal that
    a colour should be applied, and how it should be interpreted (there are 3
    different ways to set the colour: 16-colour, 256-colour, and 24-bit “true
    colour”, which use different escape codes).

  • The colours for the foreground and background are stored separately, because
    you can apply both a foreground and background. These are 24-bit numbers.

  • A value of 0 is reset.

With this, you can make any combination of the common text attributes; the
above example:

zli.Colorf("bold red", zli.Red | zli.Bold | zli.Cyan.Bg())

Would be the following in binary layout:

                                              fg 16 color mode ────┐
                                           bg 16 color mode ───┐   │
                                                               │   │        bold
                bg color ─┬──┐                fg color ─┬──┐   │   │           │
                          v  v                          v  v   v   v           v
 0000_0000 0000_0000 0000_0110 0000_0000 0000_0000 0000_0001 0010_0100 0000_0001
 ^         ^         ^         ^         ^         ^         ^         ^
64        56        48        40        32        24        16         8


We need to go through several steps to actually do something meaningful with
this. First, we want to get all the flag values (the first 24 bits); a “flag” is
a bit being set to true (1) or false (0).

const (
    Bold         = 0b0_0000_0001
    Faint        = 0b0_0000_0010
    Italic       = 0b0_0000_0100
    Underline    = 0b0_0000_1000
    BlinkSlow    = 0b0_0001_0000
    BlinkRapid   = 0b0_0010_0000
    ReverseVideo = 0b0_0100_0000
    Concealed    = 0b0_1000_0000
    CrossedOut   = 0b1_0000_0000
)

func applyColor(c uint64) {
    if c & Bold != 0 {
        // Write escape code for bold
    }
    if c & Faint != 0 {
        // Write escape code for faint
    }
    // etc.
}

& is the bitwise AND operator. It works just as the more familiar && except
that it operates on every individual bit where 0 is false and 1 is true.
The end result will be 1 if both bits are “true” (1). An example with just
four bits:

0011 & 0101 = 0001

This can be thought of as four separate operations (from left to right):

0 AND 0 = 0      both false
0 AND 1 = 0      first value is false, so the end result is false
1 AND 0 = 0      second value is false
1 AND 1 = 1      both true

So what c & Bold != 0 does is check if the “bold bit” is set:

Only bold set:
0 0000 0001 & 0 0000 0001 = 0 0000 0001

Underline bit set:
0 0000 1000 & 0 0000 0001 = 0 0000 0000      0 since there are no cases of "1 AND 1"

Bold and underline bits set:
0 0000 1001 & 0 0000 0001 = 0 0000 0001      Only "bold AND bold" is "1 AND 1"

As you can see, c & Bold != 0 could also be written as c & Bold == Bold.


The colours themselves are stored as a regular number like any other, except
that they’re “offset” a number of bits. To get the actual number value we need
to clear all the bits we don’t care about, and shift it all to the right:

const (
    colorOffsetFg   = 16

    colorMode16Fg   = 0b0000_0100_0000_0000
    colorMode256Fg  = 0b0000_1000_0000_0000
    colorModeTrueFg = 0b0001_0000_0000_0000

    maskFg          = 0b00000000_00000000_00000000_11111111_11111111_11111111_00000000_00000000
)

func getColor(c uint64) {
    if c & colorMode16Fg != 0  {
        cc := (c & maskFg) >> colorOffsetFg
        // ..write escape code for this color..
    }
}

First we check if the “16 colour mode” flag is set using the same method as the
terminal attributes, and then we AND it with maskFg to clear all the bits we
don’t care about:

                                   fg true, 256, 16 color mode ─┬──┐
                                bg true, 256, 16 color mode ─┬─┐│  │
                                                             │ ││  │┌── parsing error
 ┌───── bg color ────────────┐ ┌───── fg color ────────────┐ │ ││  ││┌─ term attr
 v                           v v                           v v vv  vvv         v
 0000_0000 0000_0000 0000_0110 0000_0000 0000_0000 0000_0001 0010_0100 0000_1001
AND maskFg
 0000_0000_0000_0000_0000_0000_1111_1111_1111_1111_1111_1111_0000_0000_0000_0000
=
 0000_0000 0000_0000 0000_0000 0000_0000 0000_0000 0000_0001 0000_0000 0000_0000
 ^         ^         ^         ^         ^         ^         ^         ^
64        56        48        40        32        24        16         8

After the AND operation we’re left with just the 24 bits we care about, and
everything else is set to 0. To get a normal number from this we need to shift
the bits to the right with >>:

1010 >> 1 = 0101    All bits shifted one position to the right.
1010 >> 2 = 0010    Shift two, note that one bit gets discarded.

Instead of >> 16 you can also divide by 65,536 (2^16): (c & maskFg) / 65536.
The end result is the same, but bit shifts are much easier to reason about in
this context.

We repeat this for the background colour (except that we shift everything 40
bits to the right). The background is actually a bit easier since we don’t need
to AND anything to clear bits, as all the bits to the right will just be
discarded:

cc := c >> ColorOffsetBg

For 256 and “true” 24-bit colours we do the same, except that we need to send
different escape codes for them, which is a detail that doesn’t really matter
for this explainer about bitmasks.


To set the background colour we use the Bg() function, which transforms a
foreground colour to a background one. This avoids having to define BgCyan
constants like Fatih’s library, and makes working with 256 and true colour
easier.

const (
    colorMode16Fg   = 0b0000_0100_0000_0000
    colorMode16Bg   = 0b0010_0000_0000_0000

    maskFg          = 0b00000000_00000000_00000000_11111111_11111111_11111111_00000000_00000000
)

func Bg(c uint64) uint64 {
    if c & colorMode16Fg != 0 {
        c = c ^ colorMode16Fg | colorMode16Bg
    }
    return (c &^ maskFg) | (c & maskFg << 24)
}

First we check if the foreground colour flag is set; if it is, we move that
bit to the corresponding background flag.

| is the OR operator; this works like || except on individual bits like in
the above example for &. Note that unlike || it won’t stop if the first
condition is false/0: if any of the two values are 1 the end result will be
1:

0 OR 0 = 0      both false
0 OR 1 = 1      second value is true, so end result is true
1 OR 0 = 1      first value is true
1 OR 1 = 1      both true

0011 | 0101 = 0111

^ is the “exclusive or”, or XOR, operator. It’s similar to OR except that it
only outputs 1 if exactly one value is 1, and not if both are:

0 XOR 0 = 0      both false
0 XOR 1 = 1      second value is true, so end result is true
1 XOR 0 = 1      first value is true
1 XOR 1 = 0      both true, so result is 0

0011 ^ 0101 = 0101

Putting both together, c ^ colorMode16Fg clears the foreground flag and
| colorMode16Bg sets the background flag.

The last line moves the bits from the foreground colour to the background
colour:

return (c &^ maskFg) | (c & maskFg << 24)

&^ is “AND NOT”: these are two operations: first it inverts the right-hand
side (“NOT”) and then ANDs the result. So in our example the maskFg value is
inverted:

 0000_0000_0000_0000_0000_0000_1111_1111_1111_1111_1111_1111_0000_0000_0000_0000
NOT
 1111_1111_1111_1111_1111_1111_0000_0000_0000_0000_0000_0000_1111_1111_1111_1111

We then use this inverted maskFg value to clear the foreground colour,
leaving everything else intact:

 1111_1111_1111_1111_1111_1111_0000_0000_0000_0000_0000_0000_1111_1111_1111_1111
AND
 0000_0000 0000_0000 0000_0110 0000_0000 0000_0000 0000_0001 0010_0100 0000_1001
=
 0000_0000 0000_0000 0000_0110 0000_0000 0000_0000 0000_0000 0010_0100 0000_1001
 ^         ^         ^         ^         ^         ^         ^         ^
64        56        48        40        32        24        16         8

C and most other languages don’t have this operator and have ~ for NOT (which
Go doesn’t have), so the above would be (c & ~maskFg) in most other languages.

Finally, we set the background colour by clearing all bits that are not part of
the foreground colour, shifting them to the correct place, and ORing this to get
the final result.


I skipped a number of implementation details in the above example for clarity,
especially for people not familiar with Go. The full code is of course
available. Putting all of this together gives a fairly nice API IMHO in about
200 lines of code which mostly avoids boilerplate.

I only showed the 16-colour codes in the examples; in reality most of this is
duplicated for 256 and true colours as well. It’s all the same logic, just with
different values. I also skipped over the details of terminal colour codes, as
this article isn’t really about that.

In many of the above examples I used binary literals for the constants, and this
seemed the best way to communicate how it all works for this article. This isn’t
necessarily the best or easiest way to write things in actual code, especially
not for such large numbers. In the actual code it looks like:

const (
    ColorOffsetFg = 16
    ColorOffsetBg = 40
)

const (
    maskFg Color = (256*256*256 - 1) << ColorOffsetFg
    maskBg Color = maskFg << (ColorOffsetBg - ColorOffsetFg)
)

// Basic terminal attributes.
const (
    Reset Color = 0
    Bold  Color = 1 << (iota - 1)
    Faint
    // ...
)

Figuring out how this works is left as an exercise for the reader 🙂

Another thing that might be useful is a little helper function to print a number
as binary; it helps visualise things if you’re confused:

func bin(c uint64) {
    reBin := regexp.MustCompile(`([01])([01])([01])([01])([01])([01])([01])([01])`)
    reverse := func(s string) string {
        runes := []rune(s)
        for i, j := 0, len(runes)-1; i < j; i, j = i+1, j-1 {
            runes[i], runes[j] = runes[j], runes[i]
        }
        return string(runes)
    }
    fmt.Printf("%[2]s → %[1]d\n", c,
        reverse(reBin.ReplaceAllString(reverse(fmt.Sprintf("%064b", c)),
            `$1$2$3${4}_$5$6$7$8 `)))
}

I put a slightly more advanced version of this at
zgo.at/zstd/zfmt.Binary.

You can also write a little wrapper to make things a bit easier:

type Bitflag64 uint64

func (f Bitflag64) Has(flag Bitflag64) bool { return f&flag != 0 }
func (f *Bitflag64) Set(flag Bitflag64)     { *f = *f | flag }
func (f *Bitflag64) Clear(flag Bitflag64)   { *f = *f &^ flag }
func (f *Bitflag64) Toggle(flag Bitflag64)  { *f = *f ^ flag }

If you need more than 64 bits then not all is lost; you can use type thingy
[2]uint64.


Here’s an example where I did it wrong:

type APITokenPermissions struct {
    Count      bool 
    Export     bool 
    SiteRead   bool 
    SiteCreate bool 
    SiteUpdate bool 
}

This records the permissions for an API token the user creates. Looks nice, but
how do you check that only Count is set?

if p.Count && !p.Export && !p.SiteRead && !p.SiteCreate && !p.SiteUpdate { .. }

Ugh; not very nice, and neither is checking if multiple permissions are set:

if perm.Export && perm.SiteRead && perm.SiteCreate && perm.SiteUpdate { .. }

Had I stored it as a bitmask instead, it would have been easier:

if perm == PermCount { .. }

const permSomething = PermExport | PermSiteRead | PermSiteCreate | PermSiteUpdate
if perm & permSomething == permSomething { .. }

No one likes functions with these kinds of signatures either:

f(false, false, true)
f(true, false, true)

But with a bitmask things can look a lot nicer:

const (
    AddWarpdrive   = 0b0001
    AddTractorBeam = 0b0010
    AddPhasers     = 0b0100
)

f(AddPhasers)
f(AddWarpdrive | AddPhasers)

Stupid light software

Post Syndicated from arp242.net original https://www.arp242.net/stupid-light.html

The ultralight hiking community is – as you may gather from the name – very
focused on ultralight equipment and minimalism. Turns out that saving a bit of
weight ten times actually adds up to a significant weight savings, making hikes
– especially longer ones of several days or weeks – a lot more comfortable.

There’s also the concept of stupid light: when you save weight to the
point of stupidity. You won’t be comfortable, you’ll miss stuff you need, your
equipment will be too fragile.

In software, I try to avoid dependencies, needless features, and complexity to
keep things reasonably lightweight. Software is already hard to start with, and
the more of it you have the harder it gets. But you need to be careful not to
make it stupid light.

It’s a good idea to avoid a database if you don’t need one; often flat text
files or storing data in memory works just as well. But at the same time
databases do offer some advantages: it’s structured and it deals with file
locking and atomicity. A younger me would avoid databases at all costs and in
hindsight that was just stupid light in some cases. You don’t need to
immediately jump to PostgreSQL or MariaDB either, and there are many
intermediate solutions, SQLite being the best known, but SQLite can also be
stupid light
in some use cases.

Including a huge library may be overkill for what you need from it; you can
perhaps just copy that one function out of there, or reimplement your own if
it’s simple enough. But this is only a good idea if you can do it well and
ensure it’s actually correct (are you sure all edge cases are handled?);
otherwise it just becomes stupid light.

I’ve seen several people write their own translation services. All of them were
lighter than gettext. And they were also completely terrible and stupid light.

Adding features or API interfaces can come with significant costs in maintenance
and complexity. But if you’re sacrificing UX and people need to work around the
lack of features then your app or API just becomes stupid light.

It’s all about a certain amount of balance. Lightweight is good, bloated is bad,
and stupid light is just as bad as bloated, or perhaps even worse, since bloated
software usually at least allows you to accomplish the task whereas stupid
light may prevent you from doing so.


I won’t list any examples here as I don’t really want to call out people’s work
as “stupid”, especially if they’re hobby projects people work on in their spare
time. I can think of a few examples, but does adding them really add any value?
I’m not so sure that it does. Arguably “stupid light” isn’t really the best
wording here – the original usage in hiking context is mostly a self-deprecating
one – and a different one without “stupid” would be better, but I couldn’t
really think of anything better 🤷 And it does have a nice ring to it.

Stupid light isn’t something you can measure and define exactly, just like you
can’t measure and exactly define “bloat”. It depends on a lot of factors. But
just as it’s worth thinking about “do we really need this?” to avoid bloat, it’s
also worth thinking about “can we really do without this?” to avoid stupid
light.

Empathy is required for democracy

Post Syndicated from arp242.net original https://www.arp242.net/empathy.html

As humans, we’re fundamentally “selfish”, for lack of a better term, as our own
feelings and experiences are our baseline. We’re also fundamentally empathic,
which is why most of us aren’t assholes.

It’s fairly easy to be empathic with people close to you: your family, friends,
coworkers, and other people you directly interact with. You’re aware of their
feelings, and much of the time you adjust your views and behaviour accordingly.
In my observation a lot of conflicts that happen – fights with your spouse,
coworker, etc. – are a failure of empathy: you don’t fully understand the other
person’s feelings and perspective. Some people have a structural failure of
empathy and we call those people assholes.

Empathy gets harder the further you go away from your immediate circle. To be
empathic towards people you’ve never met or whose lifestyle is radically
different from you requires some amount of effort; you need to be aware of their
circumstances and feelings to empathize with them. This is one reason why
fiction is quite important: it trains empathy.

The current political situation seems to be spiralling out of control in the
United States; but it’s hardly just the US where this is happening, I see it in
all other countries where I’m reasonably familiar with the politics as well: the
Netherlands, Belgium, and the UK. It’s just more extreme in the US, which is
mostly due to how the political system works.

There is a lot that can be said about all of this, but one of the most
important core reasons is a structural failure of empathy. This is something
I’ve seen on all sides of the political spectrum, it just takes different forms.

Empathy comes in two forms: you can have emotional empathy, “I understand why
you feel like this”, and intellectual empathy, “I understand why you could
hold such a viewpoint (even though I don’t agree)”. Both are important; people
don’t feel angry for no reason, and they don’t vote a certain way without
reasons either. For a democracy to function there needs to be some form of
genuine understanding – or empathy – across the population.


On the right you see things such as “BLM is just a terror organisation”. Framing
it like this “inoculates” people against developing empathy. I mean, would you
care to listen to Osama Bin Laden to develop empathy for him? Probably not.

On the left, anyone who is opposed to BLM or affirmative action is a racist.
It’s a massive failure of empathy to frame things like that, and just as most
people wouldn’t pay much attention to what a terrorist has to say, most people
also wouldn’t pay much attention to what a racist has to say.

There are plenty of stories of former neo-Nazis apologizing with tears in their
eyes for their past actions. Some people really are just bad, and some people
are just good. Most of us? Somewhere in between and a complex mix of both. Our
environment matters a lot. European, Chinese, or Indian people are not
fundamentally different from Americans as human beings, yet their attitudes,
behaviour and societies are. I’ve lived in a bunch of different countries over
the past few years, and the differences are quite striking, even within
neighbouring countries in Europe.

Germany went from being a democracy to Nazi Germany and back to being a
democracy all in the span of just 15 years. This is a particularly striking
example and entire books have been written about the sequence of events that
made this happen, but it’s an important lesson to not underestimate external
influences on people’s actions.

There are a few factors that come in to play here: the rise of partisan
mainstream media is an important part. This is definitely something where things
are worse on the right with stuff like Fox News, The Daily Mail, spurious claims
of “liberal bias”, “fake news CNN”, and so forth. This also exists on the left,
but less so.

Vox seems concerned about how Chapo Trap House will hurt Sanders’ chances,
and relativises the content as “mockery”, “insults”, and “conventional punditry
and political analysis leavened by a heavy dose of irony”. But if you look at
the actual content then that seems like a rather curious take on things. Does
“haha, that was only ironic” sound familiar to you? And this isn’t some obscure
podcast no one has heard of; it’s one of the biggest ones out there.

I got permanently banned from /r/FuckTheAltRight a few years ago for pointing
out that a “punch Trump” protest was misguided and pointless – under rule 1:
“No Alt-Right/Nazis”.

Overall, I feel things are in pretty bad shape. It’s pointless to argue which
side is worse or who started it. It’s really bad everywhere, and “the other
side is worse” is an absolutely piss-poor defence for tolerating – or even
doing – the same thing on your side. People love to throw “false equivalence”
around as some sort of defence, and that’s just missing the bigger picture.


How did we get here? I think there are a few reasons for this. When a large
group of people do something very clearly wrong then there is almost always
something more going on than just “there is a problem with those people on an
individual level”, because the baseline of “bad people”, so to speak, just isn’t
that large.

Some of the reasons include:

  • The media landscape changed significantly, and people get stuck in “echo
    chambers”.

    I don’t really like the term “echo chamber” as it’s misused so often. There is
    nothing wrong with engaging on a Donald Trump community or a socialist
    community. It’s normal and natural to want to talk to like-minded people
    without having to explain and defend your views every other comment. However,
    if all you do is engage with like-minded people and never hear anything
    else … yeah, then there’s a problem. The increasing partisan nature of
    media, as well as social media, are a big factor in this. I’ll expand on this
    in another post later this week.

  • Certain actors encourage a lack of empathy for personal gain; Donald Trump is
    an obvious example of this, as are his stooges in the form of Tucker Carlson
    and the like. Tucker Carlson is not an idiot, I have every reason to believe
    that he knows exactly what he is doing: adding more drops to the empathy
    inoculation for personal gain, because if you make sure people distrust – even
    hate – the other side then they’ll remain loyal to you.

    No one wants to be seen as a racist or sexist, so calling people you don’t
    like racist is a simple way to “win” an argument, shut people up, and get an
    audience with some outrage-driven piece. “Standing against racism” is a noble
    cause, but just because you have the appearance of doing so doesn’t mean
    you’re actually doing it.

  • A lot of regular people feel let down and forgotten; I don’t think it’s a
    coincidence that in 2008 Obama – generally seen as a bit of an “outsider”
    candidate – got elected on a campaign of “Change”, and that Trump – another
    outsider – got elected in 2016 on a campaign of “Make America Great Again”.
    These seem like the same message to me, just phrased differently. I see the
    same pattern in other countries, where people vote on similar-ish candidates
    and parties (it’s largely due to the differences in political systems that
    makes the US situation so extreme).

    I don’t think there is anything wrong with the basic idea of capitalism, but
    in its current incarnation it’s letting a lot of people down. Almost any good
    idea driven to its extremes is stupid, and capitalism is no exception. Yet
    over the past few decades the moderations to file off capitalism’s sharp edges
    have slowly eroded. This, again, is more extreme in the US, but it’s happened
    everywhere.

I’m not sure how to get out of this mess, since a lot of the problems are
complicated without easy solutions. Essentially, a lot has to do with the
breakdown of our democratic and economic institutions. These are not simple
problems.

But an active effort to understand and empathize with people needs to happen
first, instead of treating everyone you don’t exactly agree with as an enemy. A
democracy only works if everyone acknowledges everyone’s legitimacy, otherwise
it just becomes bickering over small issues.

An API is a user interface

Post Syndicated from arp242.net original https://www.arp242.net/api-ux.html

An API is a user interface for programmers and is essentially no different from
a graphical user interface, command-line user interface, or any other interface
a human (“user”) is expected to work with. Whenever you create a publicly
callable function you’re creating a user interface. Programmers are users, too.

This applies for any API: libX11, libpng, Ruby on Rails (good UX is a major
factor for Rails’ success), a REST API, etc.

A library consists of two parts: implementation and exposed API. The
implementation is all about doing stuff and interacting with the computer,
whereas the exposed API is about giving a human access to this, preferably in a
convenient way that makes it easy to understand, and making it hard to get
things wrong.

This may sound rather obvious, but in my experience this often seems forgotten.
The world is full of badly documented clunky APIs that give confusing errors (or
no errors!) to prove it.

Whenever I design a public package, module, or class I tend to start by writing
a few basic usage examples and documenting it. This first draft won’t be perfect
and while writing the implementation I keep updating the examples and
documentation to iterate on what works and axe what doesn’t. This is kind of
like TDD, except that it “tests” the UX rather than the implementation. Call it
Example Driven Development if you will.

This is similar to sketching a basic mock UI for a GUI and avoids “oh, we need
to be able to do that too” half-way through building your UI, leading to awkward
clunky UI elements added willy-nilly as an afterthought.

In code reviews the first questions I usually have are things like “is this API
easy to use?”, “Is it consistent?”, “can we extend it in the future so it won’t
be ugly?”, “is it documented, and is the documentation comprehensible?”.
Sometimes I’ll even go as far as trying to write a simple example to see if
there are any problems and if it “feels” right. Only if this part is settled do
I move on to reviewing the correctness of the actual implementation.


I’m not going to list specific examples or tips here; it really depends on the
environment, intended audience (kernel programmers are not Rails programmers),
and most of all: what you’re doing.

Sometimes a single function with five parameters would be bad UX, whereas in
other cases it might be a good option, if all five really are mandatory for
example, or if you use Python and have named parameters. In other cases it
makes more sense to have five functions which each accept a single parameter.

There usually isn’t “one right way”. If everyone started treating APIs as user
interfaces instead of “oh, it’s just for developers, they will figure it out”
then we’ll be 90% there.

That being said, the most useful general piece of advice I know of is John
Ousterhout’s concept of deep modules: modules that provide large functionality
with simple interfaces. Depth of module is a nice overview which goes into
some more detail about this, and I won’t repeat it here.