
How to execute an object file: Part 1

Post Syndicated from Ignat Korchagin original https://blog.cloudflare.com/how-to-execute-an-object-file-part-1/

Calling a simple function without linking


When we write software using a high-level compiled programming language, there are usually a number of steps involved in transforming our source code into the final executable binary:


First, each source file is compiled by a compiler, which translates the high-level programming language into machine code. The output of the compiler is a number of object files: if the project contains multiple source files, we usually get one object file per source file. The next step is the linker: since the code in different object files may reference each other, the linker is responsible for assembling all these object files into one big program and binding these references together. The output of the linker is usually a single file: our target executable.

However, at this point, our executable might still be incomplete. These days, most executables on Linux are dynamically linked: the executable itself does not have all the code it needs to run a program. Instead it expects to "borrow" part of the code at runtime from shared libraries for some of its functionality:


This process is called runtime linking: when our executable is being started, the operating system will invoke the dynamic loader, which should find all the needed libraries, copy/map their code into our target process address space, and resolve all the dependencies our code has on them.

One interesting thing to note about this overall process is that the executable machine code is produced in step 1 (compiling the source code), yet if any of the later steps fail, we still can’t run our program. So, in this series of blog posts we will investigate whether it is possible to execute machine code directly from object files, skipping all the later steps.

Why would we want to execute an object file?

There may be many reasons. Perhaps we’re writing an open-source replacement for a proprietary Linux driver or application, and want to compare whether the behaviour of some code is the same. Or we have a piece of a rare, obscure program and we can’t link to it, because it was compiled with a rare, obscure compiler. Maybe we have a source file, but cannot create a fully featured executable, because of missing build-time or runtime dependencies. Malware analysis, code from a different operating system, etc. – all these scenarios may put us in a position where either linking is not possible or the runtime environment is not suitable.

A simple toy object file

For the purposes of this article, let’s create a simple toy object file, so we can use it in our experiments:


int add5(int num)
{
    return num + 5;
}

int add10(int num)
{
    return num + 10;
}

Our source file contains only two functions, add5 and add10, which add 5 or 10 respectively to their only input parameter. It’s a small but fully functional piece of code, and we can easily compile it into an object file:

$ gcc -c obj.c 
$ ls
obj.c  obj.o

Loading an object file into the process memory

Now we will try to import the add5 and add10 functions from the object file and execute them. When we talk about executing an object file, we mean using an object file as some sort of a library. As we learned above, when we have an executable that utilises external shared libraries, the dynamic loader loads these libraries into the process address space for us. With object files, however, we have to do this manually, because ultimately we can’t execute machine code that doesn’t reside in the operating system’s RAM. So, to execute object files we still need some kind of a wrapper program:


#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

static void load_obj(void)
{
    /* load obj.o into memory */
}

static void parse_obj(void)
{
    /* parse an object file and find add5 and add10 functions */
}

static void execute_funcs(void)
{
    /* execute add5 and add10 with some inputs */
}

int main(void)
{
    load_obj();
    parse_obj();
    execute_funcs();

    return 0;
}

Above is a self-contained object loader program with some functions as placeholders. We will be implementing these functions (and adding more) in the course of this post.

First, as we established already, we need to load our object file into the process address space. We could just read the whole file into a buffer, but that would not be very efficient. Real-world object files might be big, but as we will see later, we don’t need all of the object’s file contents. So it is better to mmap the file instead: this way the operating system will lazily read the parts from the file we need at the time we need them. Let’s implement the load_obj function:


/* for open(2), fstat(2) */
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>

/* for close(2), fstat(2) */
#include <unistd.h>

/* for mmap(2) */
#include <sys/mman.h>

/* parsing ELF files */
#include <elf.h>

/* for errno */
#include <errno.h>

typedef union {
    const Elf64_Ehdr *hdr;
    const uint8_t *base;
} objhdr;

/* obj.o memory address */
static objhdr obj;

static void load_obj(void)
{
    struct stat sb;

    int fd = open("obj.o", O_RDONLY);
    if (fd <= 0) {
        perror("Cannot open obj.o");
        exit(errno);
    }

    /* we need obj.o size for mmap(2) */
    if (fstat(fd, &sb)) {
        perror("Failed to get obj.o info");
        exit(errno);
    }

    /* mmap obj.o into memory */
    obj.base = mmap(NULL, sb.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (obj.base == MAP_FAILED) {
        perror("Mapping obj.o failed");
        exit(errno);
    }

    /* the file descriptor is no longer needed after mmap(2) */
    close(fd);
}

If we don’t encounter any errors, after load_obj executes we should get the memory address pointing to the beginning of our obj.o in the obj global variable. It is worth noting that we have created a special union type for the obj variable: we will be parsing obj.o later (and peeking ahead – object files are actually ELF files), so we will be referring to the address both as an Elf64_Ehdr (the ELF header structure in C) and as a byte pointer (parsing ELF files involves calculating byte offsets from the beginning of the file).

A peek inside an object file

To use some code from an object file, we need to find it first. As I’ve leaked above, object files are actually ELF files (the same format as Linux executables and shared libraries) and luckily they’re easy to parse on Linux with the help of the standard elf.h header, which includes many useful definitions related to the ELF file structure. But we actually need to know what we’re looking for, so a high-level understanding of an ELF file is needed.

ELF segments and sections

Segments (also known as program headers) and sections are probably the main parts of an ELF file and usually the starting point of any ELF tutorial. However, there is often some confusion between the two. Different sections contain different types of ELF data: executable code (which interests us most in this post), constant data, global variables etc. Segments, on the other hand, do not contain any data themselves – they just describe to the operating system how to properly load sections into RAM for the executable to work correctly. Some tutorials say "a segment may include 0 or more sections", which is not entirely accurate: segments do not contain sections; rather, they just indicate to the OS where in memory a particular section should be loaded and what the access pattern for this memory is (read, write or execute):


Furthermore, object files do not contain any segments at all: an object file is not meant to be directly loaded by the OS. Instead, it is assumed it will be linked with some other code, so ELF segments are usually generated by the linker, not the compiler. We can check this by using the readelf command:

$ readelf --segments obj.o

There are no program headers in this file.

Object file sections

The same readelf command can be used to get all the sections from our object file:

$ readelf --sections obj.o
There are 11 section headers, starting at offset 0x268:

Section Headers:
  [Nr] Name              Type             Address           Offset
       Size              EntSize          Flags  Link  Info  Align
  [ 0]                   NULL             0000000000000000  00000000
       0000000000000000  0000000000000000           0     0     0
  [ 1] .text             PROGBITS         0000000000000000  00000040
       000000000000001e  0000000000000000  AX       0     0     1
  [ 2] .data             PROGBITS         0000000000000000  0000005e
       0000000000000000  0000000000000000  WA       0     0     1
  [ 3] .bss              NOBITS           0000000000000000  0000005e
       0000000000000000  0000000000000000  WA       0     0     1
  [ 4] .comment          PROGBITS         0000000000000000  0000005e
       000000000000001d  0000000000000001  MS       0     0     1
  [ 5] .note.GNU-stack   PROGBITS         0000000000000000  0000007b
       0000000000000000  0000000000000000           0     0     1
  [ 6] .eh_frame         PROGBITS         0000000000000000  00000080
       0000000000000058  0000000000000000   A       0     0     8
  [ 7] .rela.eh_frame    RELA             0000000000000000  000001e0
       0000000000000030  0000000000000018   I       8     6     8
  [ 8] .symtab           SYMTAB           0000000000000000  000000d8
       00000000000000f0  0000000000000018           9     8     8
  [ 9] .strtab           STRTAB           0000000000000000  000001c8
       0000000000000012  0000000000000000           0     0     1
  [10] .shstrtab         STRTAB           0000000000000000  00000210
       0000000000000054  0000000000000000           0     0     1
Key to Flags:
  W (write), A (alloc), X (execute), M (merge), S (strings), I (info),
  L (link order), O (extra OS processing required), G (group), T (TLS),
  C (compressed), x (unknown), o (OS specific), E (exclude),
  l (large), p (processor specific)

There are different tutorials online describing the most popular ELF sections in detail. Another great reference is the Linux manpages project. It is handy because it describes both sections’ purpose as well as C structure definitions from elf.h, which makes it a one-stop shop for parsing ELF files. However, for completeness, below is a short description of the most popular sections one may encounter in an ELF file:

  • .text: this section contains the executable code (the actual machine code, which was created by the compiler from our source code). This section is the primary area of interest for this post as it should contain the add5 and add10 functions we want to use.
  • .data and .bss: these sections contain global and static local variables. The difference is: .data has variables with an initial value (defined like int foo = 5;) and .bss just reserves space for variables with no initial value (defined like int bar;).
  • .rodata: this section contains constant data (mostly strings or byte arrays). For example, if we use a string literal in the code (for example, for printf or some error message), it will be stored here. Note, that .rodata is missing from the output above as we didn’t use any string literals or constant byte arrays in obj.c.
  • .symtab: this section contains information about the symbols in the object file: functions, global variables, constants etc. It may also contain information about external symbols the object file needs, like needed functions from the external libraries.
  • .strtab and .shstrtab: contain packed strings for the ELF file. Note that these are not the strings we may define in our source code (those go to the .rodata section). These are the strings describing the names of other ELF structures, like symbols from .symtab or even the section names from the table above. The ELF binary format aims to make its structures compact and of a fixed size, so all strings are stored in one place and the respective data structures just reference them as an offset into either the .shstrtab or .strtab section instead of storing the full string locally.

The .symtab section

At this point, we know that the code we want to import and execute is located in obj.o‘s .text section. But we have two functions, add5 and add10, remember? At this level the .text section is just a blob of bytes – how do we know where each of these functions is located? This is where .symtab (the "symbol table") comes in handy. It is so important that it has its own dedicated option in readelf:

$ readelf --symbols obj.o

Symbol table '.symtab' contains 10 entries:
   Num:    Value          Size Type    Bind   Vis      Ndx Name
     0: 0000000000000000     0 NOTYPE  LOCAL  DEFAULT  UND
     1: 0000000000000000     0 FILE    LOCAL  DEFAULT  ABS obj.c
     2: 0000000000000000     0 SECTION LOCAL  DEFAULT    1
     3: 0000000000000000     0 SECTION LOCAL  DEFAULT    2
     4: 0000000000000000     0 SECTION LOCAL  DEFAULT    3
     5: 0000000000000000     0 SECTION LOCAL  DEFAULT    5
     6: 0000000000000000     0 SECTION LOCAL  DEFAULT    6
     7: 0000000000000000     0 SECTION LOCAL  DEFAULT    4
     8: 0000000000000000    15 FUNC    GLOBAL DEFAULT    1 add5
     9: 000000000000000f    15 FUNC    GLOBAL DEFAULT    1 add10

Let’s ignore the other entries for now and just focus on the last two lines, because they conveniently have add5 and add10 as their symbol names. And indeed, this is the info about our functions. Apart from the names, the symbol table provides us with some additional metadata:

  • The Ndx column tells us the index of the section, where the symbol is located. We can cross-check it with the section table above and confirm that indeed these functions are located in .text (section with the index 1).
  • Type being set to FUNC confirms that these are indeed functions.
  • Size tells us the size of each function, but this information is not very useful in our context. The same goes for Bind and Vis.
  • Probably the most useful piece of information is Value. The name is misleading, because in this context it is actually an offset from the start of the containing section. That is, the add5 function starts at the very beginning of .text, and add10 starts at byte offset 15 (0xf).

So now we have all the pieces we need to parse an ELF file and find the functions within it.

Finding and executing a function from an object file

Given what we have learned so far, let’s define a plan on how to proceed to import and execute a function from an object file:

  1. Find the ELF sections table and .shstrtab section (we need .shstrtab later to lookup sections in the section table by name).
  2. Find the .symtab and .strtab sections (we need .strtab to lookup symbols by name in .symtab).
  3. Find the .text section and copy it into RAM with executable permissions.
  4. Find add5 and add10 function offsets from the .symtab.
  5. Execute add5 and add10 functions.

Let’s start by adding some more global variables and implementing the parse_obj function:



/* sections table */
static const Elf64_Shdr *sections;
static const char *shstrtab = NULL;

/* symbols table */
static const Elf64_Sym *symbols;
/* number of entries in the symbols table */
static int num_symbols;
static const char *strtab = NULL;


static void parse_obj(void)
{
    /* the sections table offset is encoded in the ELF header */
    sections = (const Elf64_Shdr *)(obj.base + obj.hdr->e_shoff);
    /* the index of `.shstrtab` in the sections table is encoded in the ELF header,
     * so we can find it without actually using a name lookup
     */
    shstrtab = (const char *)(obj.base + sections[obj.hdr->e_shstrndx].sh_offset);

    ...
}



Now that we have references to both the sections table and the .shstrtab section, we can lookup other sections by their name. Let’s create a helper function for that:



static const Elf64_Shdr *lookup_section(const char *name)
{
    size_t name_len = strlen(name);

    /* number of entries in the sections table is encoded in the ELF header */
    for (Elf64_Half i = 0; i < obj.hdr->e_shnum; i++) {
        /* a sections table entry does not contain the string name of the section;
         * instead, the `sh_name` parameter is an offset in the `.shstrtab`
         * section, which points to a string name
         */
        const char *section_name = shstrtab + sections[i].sh_name;
        size_t section_name_len = strlen(section_name);

        if (name_len == section_name_len && !strcmp(name, section_name)) {
            /* we ignore sections with 0 size */
            if (sections[i].sh_size)
                return sections + i;
        }
    }

    return NULL;
}


Using our new helper function, we can now find the .symtab and .strtab sections:



static void parse_obj(void)
{
    ...

    /* find the `.symtab` entry in the sections table */
    const Elf64_Shdr *symtab_hdr = lookup_section(".symtab");
    if (!symtab_hdr) {
        fputs("Failed to find .symtab\n", stderr);
        exit(ENOEXEC);
    }

    /* the symbols table */
    symbols = (const Elf64_Sym *)(obj.base + symtab_hdr->sh_offset);
    /* number of entries in the symbols table = table size / entry size */
    num_symbols = symtab_hdr->sh_size / symtab_hdr->sh_entsize;

    const Elf64_Shdr *strtab_hdr = lookup_section(".strtab");
    if (!strtab_hdr) {
        fputs("Failed to find .strtab\n", stderr);
        exit(ENOEXEC);
    }

    strtab = (const char *)(obj.base + strtab_hdr->sh_offset);

    ...
}


Next, let’s focus on the .text section. We noted earlier in our plan that it is not enough to just locate the .text section in the object file, like we did with the other sections. We also need to copy it over to a different location in RAM with executable permissions. There are several reasons for that, but these are the main ones:

  • Many CPU architectures either don’t allow execution of machine code that is not aligned to a memory page boundary (4 kilobytes on most x86 systems), or execute it with a performance penalty. However, the .text section in an ELF file is not guaranteed to be positioned at a page aligned offset, because the on-disk version of the ELF file aims to be compact rather than convenient.
  • We may need to modify some bytes in the .text section to perform relocations (we don’t need to do it in this case, but will be dealing with relocations in future posts). If, for example, we forget to use the MAP_PRIVATE flag, when mapping the ELF file, our modifications may propagate to the underlying file and corrupt it.
  • Finally, different sections, which are needed at runtime, like .text, .data, .bss and .rodata, require different memory permission bits: the .text section memory needs to be both readable and executable, but not writable (it is considered a bad security practice to have memory both writable and executable). The .data and .bss sections need to be readable and writable to support global variables, but not executable. The .rodata section should be readonly, because its purpose is to hold constant data. To support this, each section must be allocated on a page boundary as we can only set memory permission bits on whole pages and not custom ranges. Therefore, we need to create new, page aligned memory ranges for these sections and copy the data there.

To create a page aligned copy of the .text section, first we actually need to know the page size. Many programs usually just hardcode the page size to 4096 (4 kilobytes), but we shouldn’t rely on that. While it’s accurate for most x86 systems, other CPU architectures, like arm64, might have a different page size. So hard coding a page size may make our program non-portable. Let’s find the page size and store it in another global variable:



static uint64_t page_size;

static inline uint64_t page_align(uint64_t n)
{
    /* round up to the next multiple of page_size (a power of two) */
    return (n + (page_size - 1)) & ~(page_size - 1);
}

...

static void parse_obj(void)
{
    ...

    /* get the system page size */
    page_size = sysconf(_SC_PAGESIZE);

    ...
}



Notice that we have also added a convenience function, page_align, which rounds the passed-in number up to the next page boundary. Next, back to the .text section. As a reminder, we need to:

  1. Find the .text section metadata in the sections table.
  2. Allocate a chunk of memory to hold the .text section copy.
  3. Actually copy the .text section to the newly allocated memory.
  4. Make the .text section executable, so we can later call functions from it.

Here is the implementation of the above steps:



/* runtime base address of the imported code */
static uint8_t *text_runtime_base;

...

static void parse_obj(void)
{
    ...

    /* find the `.text` entry in the sections table */
    const Elf64_Shdr *text_hdr = lookup_section(".text");
    if (!text_hdr) {
        fputs("Failed to find .text\n", stderr);
        exit(ENOEXEC);
    }

    /* allocate memory for the `.text` copy, rounding it up to whole pages */
    text_runtime_base = mmap(NULL, page_align(text_hdr->sh_size), PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (text_runtime_base == MAP_FAILED) {
        perror("Failed to allocate memory for .text");
        exit(errno);
    }

    /* copy the contents of the `.text` section from the ELF file */
    memcpy(text_runtime_base, obj.base + text_hdr->sh_offset, text_hdr->sh_size);

    /* make the `.text` copy readonly and executable */
    if (mprotect(text_runtime_base, page_align(text_hdr->sh_size), PROT_READ | PROT_EXEC)) {
        perror("Failed to make .text executable");
        exit(errno);
    }
}

Now we have all the pieces we need to locate the address of a function. Let’s write a helper for it:



static void *lookup_function(const char *name)
{
    size_t name_len = strlen(name);

    /* loop through all the symbols in the symbol table */
    for (int i = 0; i < num_symbols; i++) {
        /* consider only function symbols */
        if (ELF64_ST_TYPE(symbols[i].st_info) == STT_FUNC) {
            /* a symbol table entry does not contain the string name of the symbol;
             * instead, the `st_name` parameter is an offset in the `.strtab`
             * section, which points to a string name
             */
            const char *function_name = strtab + symbols[i].st_name;
            size_t function_name_len = strlen(function_name);

            if (name_len == function_name_len && !strcmp(name, function_name)) {
                /* st_value is an offset in bytes of the function from the
                 * beginning of the `.text` section
                 */
                return text_runtime_base + symbols[i].st_value;
            }
        }
    }

    return NULL;
}


And finally we can implement the execute_funcs function to import and execute code from an object file:



static void execute_funcs(void)
{
    /* pointers to the imported add5 and add10 functions */
    int (*add5)(int);
    int (*add10)(int);

    add5 = lookup_function("add5");
    if (!add5) {
        fputs("Failed to find add5 function\n", stderr);
        exit(ENOEXEC);
    }

    puts("Executing add5...");
    printf("add5(%d) = %d\n", 42, add5(42));

    add10 = lookup_function("add10");
    if (!add10) {
        fputs("Failed to find add10 function\n", stderr);
        exit(ENOEXEC);
    }

    puts("Executing add10...");
    printf("add10(%d) = %d\n", 42, add10(42));
}


Let’s compile our loader and make sure it works as expected:

$ gcc -o loader loader.c 
$ ./loader 
Executing add5...
add5(42) = 47
Executing add10...
add10(42) = 52

Voila! We have successfully imported code from obj.o and executed it. Of course, the example above is simplified: the code in the object file is self-contained, does not reference any global variables or constants, and does not have any external dependencies. In future posts we will look into more complex code and how to handle such cases.

Security considerations

Processing external inputs, like parsing an ELF file from the disk above, should be handled with care. The code from loader.c omits a lot of bounds checking and additional ELF integrity checks when parsing the object file. The code is simplified for the purposes of this post and is most likely not production ready, as it can probably be exploited by specially crafted malicious inputs. Use it only for educational purposes!

The complete source code from this post can be found here.

Managed Entitlements in AWS License Manager Streamlines License Tracking and Distribution for Customers and ISVs

Post Syndicated from Harunobu Kameda original https://aws.amazon.com/blogs/aws/managed-entitlements-for-aws-license-manager-streamlines-license-management-for-customers-and-isvs/

AWS License Manager is a service that helps you easily manage software licenses from vendors such as Microsoft, SAP, Oracle, and IBM across your Amazon Web Services (AWS) and on-premises environments. You can define rules based on your licensing agreements to prevent license violations, such as using more licenses than are available. You can set the rules to help prevent licensing violations or notify you of breaches. AWS License Manager also offers automated discovery of bring your own licenses (BYOL) usage that keeps you informed of all software installations and uninstallations across your environment and alerts you of licensing violations.

License Manager can manage licenses purchased in AWS Marketplace, a curated digital catalog where you can easily find, purchase, deploy, and manage third-party software, data, and services to build solutions and run your business. Marketplace lists thousands of software listings from independent software vendors (ISVs) in popular categories such as security, networking, storage, machine learning, business intelligence, database, and DevOps.

Managed entitlements for AWS License Manager
Starting today, you can use managed entitlements, a new feature of AWS License Manager that lets you distribute licenses across your AWS Organizations, automate software deployments quickly and track licenses – all from a single, central account. Previously, each of your users would have to independently accept licensing terms and subscribe through their own individual AWS accounts. As your business grows and scales, this becomes increasingly inefficient.

Customers can use managed entitlements to manage more than 8,000 listings available for purchase from more than 1,600 vendors in AWS Marketplace. Today, AWS License Manager automates license entitlement distribution for Amazon Machine Image (AMI), container, and machine learning products purchased in the Marketplace.

How It Works
Managed entitlements provides built-in controls that allow only authorized users and workloads to consume a license within vendor-defined limits. This new license management mechanism also eliminates the need for ISVs to maintain their own licensing systems and conduct costly audits.


Each time a customer purchases licenses from AWS Marketplace or a supported ISV, the license is activated based on AWS IAM credentials, and the details are registered to License Manager.

list of granted license

Administrators distribute licenses to AWS accounts. They can manage a list of grants for each license.

list of grants

Benefits for ISVs
AWS License Manager managed entitlements provides several benefits to ISVs, simplifying automatic license creation and distribution as part of their transactional workflow. License entitlements can be distributed to end users with or without AWS accounts. Managed entitlements streamlines upgrades and renewals by removing expensive license audits and provides customers with a self-service tool with built-in license tracking capabilities. There are no fees for this feature.

Managed entitlements can also distribute licenses to end users who do not have AWS accounts. In conjunction with AWS License Manager, ISVs create a unique long-term token to identify the customer. The token is generated and shared with the customer. When the software is launched, the customer enters the token to activate the license. The software exchanges the long-term customer token for a short-term token that is passed to the API, completing the license activation. For on-premises workloads that are not connected to the Internet, ISVs can generate a host-specific license file that customers can use to run the software on that host.

Now Available
This new enhancement to AWS License Manager is available today for US East (N. Virginia), US West (Oregon), and Europe (Ireland) with other AWS Regions coming soon.

Licenses purchased on AWS Marketplace are automatically created in AWS License Manager and no special steps are required to use managed entitlements. For more details about the new feature, see the managed entitlement pages on AWS Marketplace, and the documentation. For ISVs to use this new feature, please visit our getting started guide.

Get started with AWS License Manager and the new managed entitlements feature today.

– Kame

On blogging, and something more

Post Syndicated from Yovko Lambrev original https://yovko.net/za-blogarstvaneto/


Този текст е провокиран от публикация на Иван, който обзет от носталгия по доброто, старо блогърстване ме сръчка да споделя и аз някакви мисли по темата. А вероятно и да имам повод напиша нещо тук. Oт последният ми пост са минали три месеца. Вероятно до края на тази особена 2020 година няма да напиша друг, но пък бих използвал време от почивните дни около Коледа да реорганизирам сайта си и своето online присъствие. За пореден път. Иначе казано, щом виждам смисъл да го правя, значи не съм се отказал окончателно.

Макар и да съзнавам напълно, че няма как да е като преди.

Интересно съвпадение е, че някъде точно по това време миналата година убих най-накрая Facebook профила си. И не просто го замразих – изтрих го. С искрена и неприкрита ненавист към тази зловеща платформа.

Разбира се, това не значи, че съм се разписал повече тук.

Блогването имаше повече стойност, когато Google Reader беше жив и беше платформа за комуникация и диалог – кой какво чете, кой какво споделя от своите прочетени неща. Днес това все още е възможно – например в Inoreader. Но вече е много по-трудно да събереш всички на едно място с подобна цел (ако не си от hi-tech гигантите), а друга е темата дали това изобщо е добра идея. Един дистрибутиран RSS-четец с подобна функционалност ще е най-доброто възможно решение.

Уви, за щастие RSS е още жив, но натискът да бъде убит този чуден протокол за споделяне на съдържание е огромен. От една страна големият враг са всички, които искат да обвързват потребители, данни и съдържание със себе си, а от друга немарливостта на web-разработчиците и дигиталните маркетолози, които проповядват, че RSS е мъртъв и не си струва усилията. Днес все по-често сайтовете са с нарочно премахнат или спрян RSS.

Без такива неща, блоговете в момента са като мегафон, на който са извадени батериите – островчета, които продължават да носят смисъл и значение, но като в легендата за хан Кубрат – няма я силата на съчките събрани в сноп.

Всъщност вината ни е обща. Защото се подхлъзваме като малки деца на шаренко по всяка заигралка в мрежата, без да оценяваме плюсовете и минусите, още по-малко щетите, които би могла да нанесе. Интернет вече е платформа с по-голямо социално, отколкото технологично значение – оценка на въздействието би трябвало да бъде задължителна стъпка в тестването на всяка нова идея или проект в мрежата. И то не толкова от този, който я пуска, защото обикновено авторът е заслепен от мечти и жажда за слава (а често и неприкрита алчност), а от първите, че и последващите потребители.

Блогърите предадохме блогването, защото „глей к’во е яко да се пра’иш на интересен в 140 знака в твитър“ или „как ши стана мега-хипер-секси инфлуенцър с facebook и instagram в три куци стъпки“. Нищо, че социалките може и да те усилват (до едно време), но после само ти крадат трафика. И се превърщат в посредник със съмнителна добавена стойност, но пък взимащ задължително своето си.

Това всъщност беше само върхът на айсберга, защото от самото начало технооптимизмът беше в повече. В много посоки и отношения. И ако това в романтичните времена на съзиданието бе оправдано, и дори полезно – сега ни е нужен технореализъм и критично отношение за поправим счупеното.

The Internet today is not what it was supposed to be. The gravity of the giants has sucked everything in. And it continues. And if we do not want to end up in a black hole, a joint effort is needed to decentralize it. That will not be easy, because this time the creators will not have many allies on the business side. A difficult war with the giants and their influence lies ahead. Our only powerful ally is us, the people – if we manage to explain to one another why resistance is necessary.

So far, we are not succeeding.

P.S. That is, psst – I'm tossing the ball over to Yasen and Maria.

Last phase of the desktop wars?

Post Syndicated from Armed and Dangerous original http://esr.ibiblio.org/?p=8764

The two most intriguing developments in the recent evolution of the Microsoft Windows operating system are the Windows Subsystem for Linux (WSL) and the porting of their Microsoft Edge browser to Ubuntu.

For those of you not keeping up, WSL allows unmodified Linux binaries to run under Windows 10. No emulation, no shim layer, they just load and go.

Microsoft developers are now landing features in the Linux kernel to improve WSL. And that points in a fascinating technical direction. To understand why, we need to notice how Microsoft’s revenue stream has changed since the launch of its cloud service in 2010.

Ten years later, Azure makes Microsoft most of its money. The Windows monopoly has become a sideshow, with sales of conventional desktop PCs (the only market it dominates) declining. Accordingly, the return on investment of spending on Windows development is falling. As PC volume sales continue to fall off, it’s inevitably going to stop being a profit center and turn into a drag on the business.

Looked at from the point of view of cold-blooded profit maximization, this means continuing Windows development is a thing Microsoft would prefer not to be doing. Instead, they’d do better putting more capital investment into Azure – which is widely rumored to be running more Linux instances than Windows these days.

Our third ingredient is Proton. Proton is the emulation layer that allows Windows games distributed on Steam to run over Linux. It’s not perfect yet, but it’s getting close. I myself use it to play World of Warships on the Great Beast.

The thing about games is that they are the most demanding possible stress test for a Windows emulation layer, much more so than business software. We may already be at the point where Proton-like technology is entirely good enough to run Windows business software over Linux. If not, we will be soon.

So, you’re a Microsoft corporate strategist. What’s the profit-maximizing path forward given all these factors?

It’s this: Microsoft Windows <em>becomes</em> a Proton-like emulation layer over a Linux kernel, with the layer getting thinner over time as more of the support lands in the mainline kernel sources. The economic motive is that Microsoft sheds an ever-larger fraction of its development costs as less and less has to be done in-house.

If you think this is fantasy, think again. The best evidence that it’s already the plan is that Microsoft has already ported Edge to run under Linux. There is only one way that makes any sense, and that is as a trial run for freeing the rest of the Windows utility suite from depending on any emulation layer.

So, the end state this all points at is: New Windows is mostly a Linux kernel, there’s an old-Windows emulation over it, but Edge and the rest of the Windows user-land utilities <em>don’t use the emulation.</em> The emulation layer is there for games and other legacy third-party software.

Economic pressure will be on Microsoft to deprecate the emulation layer. Partly because it’s entirely a cost center. Partly because they want to reduce the complexity cost of running Azure. Every increment of Windows/Linux convergence helps with that – reduces administration and the expected volume of support traffic.

Eventually, Microsoft announces upcoming end-of-life on the Windows emulation. The OS itself, and its userland tools, have for some time already been Linux underneath a carefully preserved old-Windows UI. Third-party software providers stop shipping Windows binaries in favor of ELF binaries with a pure Linux API…

…and Linux finally wins the desktop wars, not by displacing Windows but by co-opting it. Perhaps this is always how it had to be.


Post Syndicated from Yovko Lambrev original https://yovko.net/hey/

Next year email turns half a century old. To this day it remains one of the fundamental network services without which communication on the Internet would not be the same.

In the 25 years I have personally been using email, I have seen all manner of "revolutionary" ideas for evolving it. Every Tom, Dick, and Harry has strained to "fix" the technology. No end of marketing and other effort has been burned explaining to the world how broken and inadequate email supposedly is, and how this latest new invention would be the one to kill it.

Baloney! Email is alive and well. And it keeps outliving the fleeting sputterings of every one of its would-be competitors so far.

Granted… email is not perfect. This world holds too many ambitious little people whose intellect stretches no further than straining to abuse everything in sight. Some of them fill our mailboxes with SPAM (the virtual and the physical ones alike). Other little pests overdo their marketing urge to stalk and profile us, sending us messages with "invisible" embedded pixels and all manner of similar eavesdroppers, so they know if and when we opened their priceless, boring, boilerplate newsletters and whether they lured us into clicking somewhere in them before we tossed them in the trash. A third kind of nuisance is often our dear colleagues, who CC half the company for any reason or none. Alas, very often that is what the boss has ordered or what the corporate "culture" demands.

Yes, reading (and answering) email has become a chore because of too much spam, marketing, and nonsense circulating back and forth (very often "just in case"). And because almost no one makes the effort to observe any email etiquette.

I am an absolute fan of email in its essential form. I do not share the idea that it is broken or inconvenient. If you put some effort into organizing your incoming mail with a few filters and subfolders, and build the habit of reading mail no more than once or twice a day, life gets a little brighter. And yes – I have turned off all automatic new-mail notifications on smartphones, desktops, tablets, and everything else. Email is for asynchronous communication: I read and write when I can and when I want to. This is not chat!

That is why I whinnied with delight when I learned that Basecamp was working on its own email service, with the aim of fighting some of the annoyances around email. I immediately joined the waiting list to try it. I have now been using it actively for two months, and I dare say that for the first time something meaningful has been achieved on the "make email better" front. The thing is called Hey.com, and it is a paid personal service for personal email. A business option with a custom domain is planned for the future, but for now you can only get a personal address of the form [email protected]

I won't hide that at first some things did not entirely appeal to me. Above all, that there is no way to use a standard mail client such as Thunderbird or Apple Mail, because the service is not accessible over IMAPS/SMTPS. But the point of using only Hey's own clients is the completely different approach to email and the service-specific features, which could not work through a standard mail client. So that compromise is worth its price.

Hey resembles nothing else I have ever used for mail. It takes a little getting used to, but after only a day or two I personally do not want to see or hear about any other mail service or client.

hey.com - Imbox

The philosophy is this: I have full control over what mail I want to receive. All incoming mail first passes through something called the Screener, which holds every message arriving for the first time from an address that has never written to me before. If a message from some nuisance or spammer lands there, I simply mark that I do not want to hear from them and… that's that. I will never see or hear a message from them again. Well, unless they decide to write from another address – but then they land in the Screener again, and again I can mark them as an unwanted correspondent.

In fact such a message does not bounce; the sender gets no feedback about what happened – I simply never see it. Bliss! 🙂 The perfect weapon against nuisances!

hey.com - The Screener

In theory this also works against spammers, but most spam is usually filtered out beforehand and never even reaches the Screener, though from time to time some does. It also happens that a useful message gets marked as SPAM, but quite rarely. At the end of the day, no anti-spam filter is perfect, especially when some messages genuinely look like SPAM because of the platform they were sent through or the carelessness of their sender.

In other words, at least once, I have to allow everyone who would like to write to me to do so. And that happens with the first message received from them. Which is to say: you do not get a second chance to make a good first impression. 🙂

Over time I can change my mind and silence correspondents I had allowed, or conversely allow ones I had silenced.

The messages I have agreed to receive are sorted into three broad categories. One is called the Paper Trail and is usually for purely informational messages that require no reply – most often confirmations of paid bills, changed passwords, and other such things. Another is called The Feed, meant for newsletters, marketing messages, things to read when I have time for them. And the third, which is really the main one, is called the Imbox (from "important box"): all the messages that are not spam, not mere information, not marketing or other reading matter. These usually require attention, and most often a reply. They are my real mail and the important stuff. So the whole idea is that only such messages reach my Imbox.
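Hey's actual implementation is of course not public, but the Screener-plus-triage flow described above boils down to a tiny bit of state per correspondent: unknown senders get held, and the user's one-time verdict routes all their future mail. A purely illustrative sketch (all names are mine, not Hey's):

```python
# Illustrative sketch of Hey-style mail triage - not Basecamp's actual code.
# First-time senders go to the Screener; a one-time verdict routes everything after.

SCREENER, IMBOX, THE_FEED, PAPER_TRAIL, BLOCKED = (
    "screener", "imbox", "the_feed", "paper_trail", "blocked")

class Mailbox:
    def __init__(self):
        # sender address -> IMBOX / THE_FEED / PAPER_TRAIL / BLOCKED
        self.verdicts = {}

    def route(self, sender: str) -> str:
        if sender not in self.verdicts:
            return SCREENER           # first contact: the user must screen it
        return self.verdicts[sender]  # BLOCKED mail is silently dropped, no bounce

    def screen(self, sender: str, verdict: str):
        """The user's one-time decision about a new correspondent."""
        self.verdicts[sender] = verdict

box = Mailbox()
print(box.route("boss@example.com"))     # screener (never seen before)
box.screen("boss@example.com", IMBOX)
box.screen("newsletter@example.com", THE_FEED)
box.screen("spammer@example.com", BLOCKED)
print(box.route("boss@example.com"))     # imbox
print(box.route("spammer@example.com"))  # blocked
```

The point of the design is that the decision is made once per correspondent, not once per message – which is exactly why the Screener feels so calm in practice.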

For each correspondent I can decide whether their messages stay in the Imbox or go to The Feed or the Paper Trail. For each email I can decide to answer immediately or mark it for later (Reply Later); to highlight some important information in it (Clip) or set it aside (Aside) for whatever reason of my own. And of course I can create my own labels and tag messages with them according to my own personal criteria.

The inbox-zero idea does not exist here. Nor does the useless "archiving," which is really just moving messages around. Your important mail simply accumulates in the Imbox – labeled or not, as you wish. All you need is a good search. Well, you have one. Along with a full 100GB of space.

There are other niceties, which can be browsed here. You can also create a trial account to get a taste of the service and its interface. Everything works directly in the browser, but there are apps for mobile phones as well as for desktop operating systems.

And one more thing I simply cannot fail to mention, because it won me over for good and gives me perverse pleasure. All tracking pixels embedded in messages, snippets of code, marketing crud, and every trick of that kind are neutralized automatically, and the messages are flagged as having contained such vermin. No ifs, no buts! Kill 'em all! Weep, marketing gurus! Cry bloody tears! Your HubSpot bugs, Salesforce pacifiers, Facebook pixels, Mailchimp suction cups, and the rest of your marketing ticks do not work. In hey.com today, and soon across other corners of the Internet! 🙂
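I do not know how Hey does this under the hood, but the general idea of tracker detection can be sketched with a simple heuristic: a tiny or hidden image loaded from a remote URL is almost always a tracking pixel. A rough, purely illustrative sketch:

```python
# Rough sketch of spotting tracking pixels in HTML mail - not Hey's actual
# mechanism. Heuristic: tiny or hidden <img> tags with a remote src are
# almost always trackers.
from html.parser import HTMLParser

class PixelSpy(HTMLParser):
    def __init__(self):
        super().__init__()
        self.trackers = []

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        a = dict(attrs)
        src = a.get("src", "")
        tiny = a.get("width") in ("0", "1") or a.get("height") in ("0", "1")
        hidden = "display:none" in (a.get("style") or "").replace(" ", "")
        if src.startswith("http") and (tiny or hidden):
            self.trackers.append(src)

html = '''<p>Our priceless newsletter!</p>
<img src="https://tracker.example.com/open?u=42" width="1" height="1">'''
spy = PixelSpy()
spy.feed(html)
print(spy.trackers)  # ['https://tracker.example.com/open?u=42']
```

A real service would go further – matching known tracker domains and rewriting or dropping the offending tags before the message is rendered – but the detection step really is this mundane.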

In short, Hey is for everyone who values their time. For those for whom it matters that email be a useful tool, not a leech on precious time. Hey will appeal to everyone who uses email intensively as their main channel of communication, because it brings calm to the experience of personal email. So much calm that I sometimes wonder whether I really have no new mail and whether something is broken.

Hey is not for those looking for any old cheapo mailbox at abv.bg, gmail.com, and the like. The price is $99 per year. No discounts, no more-basic or more-premium plans, no complicated options. There is one service, and that is it. But it is worth every cent. And the address you take remains yours forever (it gets forwarded), even if you cancel later.

P.S. Hey! I have no relationship whatsoever with Basecamp or their team. I am simply a fan of their products, and of their philosophy of business and life. I paid for the service with my own money, of my own free will.

Documentation as knowledge capture

Post Syndicated from esr original http://esr.ibiblio.org/?p=8741

Maybe you’re one of the tiny minority of programmers that, like me, already enjoys writing documentation and works hard at doing it right. If so,the rest of this essay is not for you and you can skip it.

Otherwise, you might want to re-read (or at least re-skim) Ground-Truth Documents before continuing. Because ground-truth documents are a special case of a more general reason why you might want to try to change your mindset about documentation.

In that earlier essay I used the term “knowledge capture” in passing. This is a term of art from AI; it refers to the process of extracting domain knowledge from the heads of human experts into a form that can be expressed as an algorithm executable by the literalistic logic of a computer.

What I invite you to think about now is how writing documentation for software you are working on can save you pain and effort by (a) capturing knowledge you have but don’t know you have, and (b) eliciting knowledge that you have not yet developed.

Humans, including me and you, are sloppy and analogical thinkers who tend to solve problems by pattern-matching against noisy data first and checking our intuitions with logic after the fact (if we actually get that far). There’s no point in protesting that it shouldn’t be that way, that we should use rigorous logic all the way down, because our brains simply aren’t wired for that. Evolved cognition is a kludge – more properly, multiple stacks of kludges – developed under selection to be just barely adequate at coping.

This kludginess is revealed by, for example, optical illusions. And by the famous 7±2 result about the very limited size of the human working set. And the various well-documented ways that human beings are extremely bad at statistical reasoning. And in many other ways…

When you do work that is as demanding of rigor as software engineering, one of your central challenges is hacking around the limitations of your own brain. Sometimes this develops in very obvious ways; the increasing systematization of testing during development during the last couple of decades, for example.

Other brain hacks are more subtle. Which is why I am here to suggest that you try to stop thinking of documentation as a chore you do for others, and instead think of it as a way to explore your problem space, and the space in your head around your intuitions about the problem, so you can shine light into the murkier corners of both. Writing documentation can function as valuable knowledge capture about your problem domain even when you are the only expert about what you are trying to do.

This is why my projects often have a file called “designer’s notes” or “hacking guide”. Early in a project these may just be random jottings that are an aid to my own memory about why things are the way they are. They tend to develop into a technical briefing about the code internals for future contributors. This is a good habit to form if you want to have future contributors!

But even though the developed version of a “designer’s notes” looks other-directed, it’s really a thing I do to reduce my own friction costs. And not just in the communication-to-my-future-self way either. Yes, it’s tremendously valuable to have a document that, months or years after I wrote it, reminds me of my assumptions when I have half-forgotten them. And yes, a “designer’s notes” file is good practice for that reason alone. But its utility does not even start there, let alone end there.

Earlier, I wrote of (a) capturing knowledge you have but don’t know you have, and (b) eliciting knowledge that you have not yet developed. The process of writing your designer’s notes can be powerful and catalytic that way even if they’re never communicated. The thing you have to do in your brain to narratize your thoughts so they can be written down is itself an exploratory tool.

As with “designer’s notes” so with every other form of documentation from the one-line code comment to a user-oriented HOWTO. When you achieve right mindset about these they are no longer burdens; instead they become an integral part of your creative process, enabling you to design better and write better code with less total effort.

I understand that to a lot of programmers who now experience writing prose as difficult work this might seem like impossible advice. But I think there is a way from where you are to right mindset. That way is to let go of the desire for perfection in your prose, at least early on. Sentence fragments are OK. Misspellings are OK. Anything you write that explores the space is OK, no matter how barbarous it would look to your third-grade grammar teacher or the language pedants out there (including me).

It is more important to do the discovery process implied by writing down your ideas than it is for the result to look polished. If you hold on to that thought, get in the habit of this kind of knowledge capture, and start benefiting from it, then you might find that over time your standards rise and it gets easier to put more effort into polishing.

If that happens, sure; let it happen – but it’s not strictly necessary. The only thing that is necessary is that you occasionally police what you’ve recorded so it doesn’t drift into reporting something the software no longer does. That sort of thing is a land-mine for anyone else who might read your notes and very bad form.

Other than that, though, the way to get to where you do the documentation-as-knowledge-capture thing well is by starting small; allow a low bar for polish and completeness and grow the capability organically. You will know you have won when it starts being fun.

A user story about user stories

Post Syndicated from esr original http://esr.ibiblio.org/?p=8720

The way I learned to use the term “user story”, back in the late 1990s at the beginnings of what is now called “agile programming”, was to describe a kind of roleplaying exercise in which you imagine a person and the person’s use case as a way of getting an outside perspective on the design, the documentation, and especially the UI of something you’re writing.

For example:

Meet Joe. He works for Randomcorp, who has a nasty huge old Subversion repository they want him to convert to Git. Joe is a recent grad who got thrown at the problem because he’s new on the job and his manager figures this is a good performance test in a place where the damage will be easily contained if he screws up. Joe himself doesn’t know this, but his teammates have figured it out.

Joe is smart and ambitious but has little experience with large projects yet. He knows there’s an open-source culture out there, but isn’t part of it – he’s thought about running Linux at home because the more senior geeks around him all seem to do that, but hasn’t found a good specific reason to jump yet. In truth most of what he does with his home machine is play games. He likes “Elite: Dangerous” and the Bioshock series.

Joe knows Git pretty well, mainly through the Tortoise GUI under Windows; he learned it in school. He has only used Subversion just enough to know basic commands. He found reposurgeon by doing web searches. Joe is fairly sure reposurgeon can do the job he needs and has told his boss this, but he has no idea where to start.

What does Joe’s discovery process looks like? Read the first two chapters of “Repository Editing with Reposurgeon” using Joe’s eyes. Is he going to hit this wall of text and bounce? If so, what could be done to make it more accessible? Is there some way to write a FAQ that would help him? If so, can we start listing the questions in the FAQ?

Joe has used gdb a little as part of a class assignment but has not otherwise seen programs with a CLI resembling reposurgeon’s. When he runs it, what is he likely to try to do first to get oriented? Is that going to help him feel like he knows what’s going on, or confuse him?

“Repository Editing…” says he ought to use repotool to set up a Makefile and stub scripts for the standard conversion workflow. What will Joe’s eyes tell him when he looks at the generated Makefile? What parts are likeliest to confuse him? What could be done to fix that?

Joe, my fictional character, is about as little like me as is plausible at a programming shop in 2020, and that’s the point. If I ask abstractly “What can I do to improve reposurgeon’s UI?”, it is likely I will just end up spinning my wheels; if, instead, I ask “What does Joe see when he looks at this?” I am more likely to get a useful answer.

It works even better if, even having learned what you can from your imaginary Joe, you make up other characters that are different from you and as different from each other as possible. For example, meet Jane the system administrator, who got stuck with the conversion job because her boss thinks of version-control systems as an administrative detail and doesn’t want to spend programmer time on it. What do her eyes see?

In fact, the technique is so powerful that I got an idea while writing this example. Maybe in reposurgeon’s interactive mode it should issue a first line that says “Interactive help is available; type ‘help’ for a topic menu.”

However. If you search the web for “design by user story”, what you are likely to find doesn’t resemble my previous description at all. Mostly, now twenty years after the beginnings of “agile programming”, you’ll see formulaic stuff equating “user story” with a one-sentence soundbite of the form “As an X, I want to do Y”. This will be surrounded by a lot of talk about processes and scrum masters and scribbling things on index cards.

There is so much gone wrong with this it is hard to even know where to begin. Let’s start with the fact that one of the original agile slogans was “Individuals and Interactions Over Processes and Tools”. That slogan could be read in a number of different ways, but under none of them at all does it make sense to abandon a method for extended insight into the reactions of your likely users for a one-sentence parody of the method that is surrounded and hemmed in by bureaucratic process-gabble.

This is embedded in a larger story about how “agile” went wrong. The composers of the Agile Manifesto intended it to be a liberating force, a more humane and effective way to organize software development work that would connect developers to their users to the benefit of both. A few of the ideas that came out of it were positive and important – besides design by user story, test-centric development and refactoring leap to mind.

Sad to say, though, the way “user stories” became trivialized in most versions of agile is all too representative of what it has often become under the influence of two corrupting forces. One is fad-chasers looking to make a buck on it, selling it like snake oil to managers forever perplexed by low productivity, high defect rates, and inability to make deadlines. Another is the managers’ own willingness to sacrifice productivity gains for the illusion of process control.

It may be too late to save “agile” in general from becoming a deadening parody of what it was originally intended to be, but it’s not too late to save design by user story. To do this, we need to bear down on some points that its inventors and popularizers were never publicly clear about, possibly because they themselves didn’t entirely understand what they had found.

Point one is how and why it works. Design by user story is a trick you play on your social-monkey brain that uses its fondness for narrative and characters to get you to step out of your own shoes.

Yes, sure, there’s a philosophical argument that stepping out of your shoes in this sense is impossible; Joe, being your fiction, is limited by what you can imagine. Nevertheless, this brain hack actually works. Eppure, si muove; you can generate insights with it that you wouldn’t have had otherwise.

Point two is that design by user story works regardless of the rest of your methodology. You don’t have to buy any of the assumptions or jargon or processes that usually fly in formation with it to get use out of it.

Point three is that design by user story is not a technique for generating code, it’s a technique for changing your mind. If you approach it in an overly narrow and instrumental way, you won’t imagine apparently irrelevant details like what kinds of video games Joe likes. But you should do that sort of thing; the brain hack works in exact proportion to how much imaginative life you give your characters.

(Which, in particular, is why “As an X, I want to do Y” is such a sadly reductive parody. This formula is designed to stereotype the process, but stereotyping is the enemy of novelty, and novelty is exactly what you want to generate.)

A few of my readers might have the right kind of experience for this to sound familiar. The mental process is similar to what in theater and cinema is called “method acting.” The goal is also similar – to generate situational responses that are outside your normal habits.

Once again: you have to get past tools and practices to discover that the important part of software design – the most difficult and worthwhile part – is mindset. In this case, and temporarily, someone else’s.

Looking for C-to-anything transpilers

Post Syndicated from esr original http://esr.ibiblio.org/?p=8705

I’m looking for languages that have three properties:

(1) Must have weak memory safety. The language is permitted to crash on an out-of-bounds array reference or null pointer, but may not corrupt or overwrite memory as a result.

(2) Must have a transpiler from C that produces human-readable, maintainable code that preserves (non-perverse) comments. The transpiler is allowed to not do a 100% job, but it must be the case that (a) the parts it does translate are correct, and (b) the amount of hand-fixup required to get to complete translation is small.

(3) Must not be Go, Rust, Ada, or Nim. I already know about these languages and their transpilers.

Contact-tracing apps are a bad idea

Post Syndicated from Yovko Lambrev original https://yovko.net/contact-tracing-apps-bad-idea/

Lately we have grown used to the notion that a "high-tech" software solution can be found for nearly every problem around us. I do not want to get into unproductive debates about whether the IT business oversteps the bounds of decency by promoting itself as omnipotent. But it is a fact that few are willing to admit the stupidities produced by this same otherwise promising industry. I won't hide that, as an insider, my personal opinion is that quite a few lines were crossed long ago.

The latest very dangerous idea, of dubious benefit but indisputable risk, is tracing contacts between people via apps on their "smart" phones. What makes it more dangerous still is that the two giant companies Apple and Google, which currently hold 99.5% of the mobile operating system market, have put their weight behind it.

The idea arose in response to expectations (political pressure cannot be ruled out either) that technological tracing of contacts between people could help control the spread of the new coronavirus faster or more effectively. At first reading, an exceptionally humane idea. The problem is that techno-optimists usually create and test their ideas in a controlled laboratory environment. But when it turns out that, released into real life, the same idea does more harm than good, they shrug and refuse to take responsibility. That brat Zuckerberg is the classic example.

Android users are screwed by presumption, because thanks to their phones they long ago became data donors for Google anyway. iOS users until now had some grounds to assume they might be leading a slightly more sheltered existence on Apple's cozy island. But in a few days (when iOS 13.5 comes out) they too will have to part with that illusion. And the "fruit" company will do it in the most swinish way possible – delaying a critical security fix in order to combine it with the new contact-tracing API in a single update. In other words, Apple leaves its users with no real choice, and no clean fix for the broken version 13.4.1.

What exactly do Apple and Google intend to do, jointly and in coordination?

They will not actually be releasing contact-tracing apps, as is widely and incorrectly claimed. What they will add to their operating systems is a so-called application programming interface (API), which developers will be able to use to build such apps. APIs serve various purposes: through them programmers can implement one feature or another in their applications using the capabilities of the phone, the operating system, or external services. For example, they can "ask" the phone's GPS for its geographic location, show it on a map, attach it to a photo the camera is taking, and other such things.

The new API is designed to do the following. You know how, when you turn on your phone's Bluetooth (to pair your headphones, say), you usually see a pile of other devices near you at that moment? Bluetooth normally reaches only a few meters. These days it draws negligible battery power, and for convenience it is on by default on most modern phones. Well, that is the basis of the idea of tracing the spread of the virus that causes COVID-19.

As we walk around with our phones, they will "listen" for which other phones around us are close enough for long enough, and will exchange anonymous (or anonymized) identifiers. There are two important little words here: one is anonymous, the other is identifiers. Do you feel the irony in the phrase "anonymous identifiers"? But for now let us remain cool-headed and well-disposed toward this idea of saving humanity with apps.

So: you walk down the street, or around the office, or elsewhere, you meet various people, and when your phone "hears" the Bluetooth of another phone nearby, it records that fact. It is important to clarify that we are promised it will not record the location, the phone number, or "Pesho Mihaylov's iPhone"; instead it will store only some (let's say) number meant to correspond to that phone, so that we have a trace while the owner stays anonymized. The phones of everyone else around us likewise take note that we have met.

We call these "numbers" anonymized identifiers because, looking at them written out, there is no way to tell which one corresponds to whose phone. The lists of such numbers are kept only on the phone that collected them. They are not sent anywhere (for now). This further promise is an important element of the idea, meant to reassure us that measures have been taken so the whole thing does not look as scary as it actually is. Or at least so that it does not look too easy to conclude that Misho met Mimi in the middle of the night last Wednesday and… well…

In fact Apple and Google "fought a battle" against the inclinations of several governments (including European ones) that wanted this data sent centrally to national servers and shared (supposedly only) with the health authorities – but potentially also with the police, should people need to be tracked down. That last point alone is insane confirmation of how thin the ice is that we are skating on with these pushes to implement such ideas.

Но как използваме това, че телефоните ни знаят с кои други телефони сме били наблизо?

Те ще пазят списъка от идентификатори, с които сме се срещали за някакъв период от време (две седмици). Ако междувременно някой от хората, с които сме били в близост, се разболее или си направи тест, който се окаже положителен, той/тя може да отбележи това чрез своето приложение, а неговият телефонен идентификатор (т.е. онова число) ще бъде обявено за обвързано със заразен човек и разпространено до всички устройства в системата. Така ако останалите телефони открият такъв идентификатор в своя локален списък от последните две седмици, ще алармират притежателите си, че са били в близък контакт с болен или заразен, и ще им препоръчат да си направят тест. Без да им казват кой точно е този контакт, защото не знаят това.
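The local matching step described above can be sketched in a few lines. This is a deliberately simplified illustration – the names and plain-string identifiers are invented for the example; the real Apple/Google protocol uses rotating cryptographic identifiers:

```python
# A minimal sketch of the on-device matching step. Identifiers here
# are plain strings for illustration only; the real protocol derives
# rotating cryptographic identifiers.

CONTACT_LOG_DAYS = 14  # how long a phone keeps overheard identifiers

def find_risky_contacts(local_log, published_infected_ids):
    """Return identifiers from our local contact log that were later
    announced as belonging to users who reported a positive test.

    local_log: set of identifiers this phone overheard in the last
               CONTACT_LOG_DAYS days (kept only on the device).
    published_infected_ids: set of identifiers voluntarily published
               by infected users.
    """
    return local_log & published_infected_ids

# Example: this phone overheard three identifiers; one was later
# published as infected, so the owner would get one alert.
overheard = {"id-4821", "id-0057", "id-913a"}
announced = {"id-0057", "id-ffff"}
matches = find_risky_contacts(overheard, announced)
```

Note that the phone never learns *who* the matching identifier belongs to – only that some overheard identifier was later flagged.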

It all looks like a clever, workable solution – one that might indeed help trace the path of the infection faster and more effectively, and perhaps contain it, without threatening people's privacy too much. And so it would be, under the following conditions:

  • if everything really happened exactly as promised and as we imagine it;
  • if we could trust that Apple and Google implement this algorithm in good faith, with no accidental or deliberate bugs in the implementation and no hidden functionality serving other purposes;
  • if Bluetooth technology had no imperfections – we will look at some of them shortly;
  • if all users were well-intentioned and honest, neither submitting false information nor withholding it;
  • if governments could be trusted not to pressure their citizens into using the system against their will… or not to pressure Google and Apple into changing their implementation later;
  • if there were no risk that other collected data (from the telecoms, for instance) could be combined in ways that reveal people's identities;
  • if we knew more about the mechanism of infection, which is still shrouded in a fog of assumptions;
  • if…

Of course it is tempting to use modern technology to find a way out of the situation we have landed in – to save lives, perhaps, and to start rebuilding the world's economy sooner.

But the Google and Apple idea hides many pitfalls and risks, which neutralize most of the benefits

The main problem is that we still do not know exactly how long an infected person remains contagious to healthy people. Nor are we sure when that period begins or ends. That makes it hard to judge how likely we are to have caught the virus after being in close proximity to someone who turned out to be infected a few days after we met. Scientists are not yet entirely sure whether the infection spreads via surfaces and objects or only through droplets. Presumably we have to linger near an infected person for a while, but we do not know how long exactly. It also seems plausible that infection is more likely indoors than outdoors, but… there is no firm confirmation of that yet either.

Against the backdrop of these unknowns, add the imperfections of Bluetooth itself. We can make educated guesses about the distance between two devices that "hear" each other over Bluetooth from the attenuation of the signal, since its strength falls off with the distance between them. But the signal attenuates differently depending on whether the phone is in a hand, in a pocket, in a handbag, or whether there is a wall or some other obstacle between the two devices. That makes any attempt to judge how close two such devices really are, across countless everyday situations, quite approximate.
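The usual way to turn signal strength into distance is a log-distance path-loss estimate. The sketch below shows why it is so fragile: both the reference power (RSSI at one meter) and the path-loss exponent are assumptions, and the exponent changes with the environment, so the very same reading yields wildly different distances:

```python
import math  # not strictly needed; 10 ** x suffices for the model

# Log-distance path-loss sketch. measured_power_dbm (RSSI at 1 m) and
# the path-loss exponent n are assumed values: n is roughly 2 in free
# space and higher indoors or through obstacles.

def estimate_distance_m(rssi_dbm, measured_power_dbm=-60, n=2.0):
    """Estimate distance in meters from a single RSSI reading."""
    return 10 ** ((measured_power_dbm - rssi_dbm) / (10 * n))

# The same -80 dBm reading: 10 m under a free-space model (n=2),
# but under 5 m if the path-loss exponent is 3 (pocket, wall, ...).
free_space = estimate_distance_m(-80, n=2.0)
obstructed = estimate_distance_m(-80, n=3.0)
```

So unless the app somehow knows whether the phone is in a pocket or behind a wall – which it cannot – the distance it infers is little more than a guess.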

On top of that, different devices use chips of varying quality – some will talk to each other better than others. Some of the cheaper models even struggle to "understand" their own kind. Again depending on the flaws of particular models, some stay "connected" long after that has ceased to be true, while others need a very long time to "discover" that they are in range.

All of this would massively distort any estimate of exactly how long, and at what real distance, we spent next to someone's phone. And last but not least: there is no way to tell over Bluetooth whether we are indoors or outdoors. Standing in an outdoor queue at the required distance of 1.5-2 meters, it is quite likely your phones will decide you have been close enough to each other for long enough.

To this, add the human factor. The fact that our phone registered proximity to another phone in no way means the people themselves were actually close. Some simple, entirely realistic situations:

  • Your home or office sits right next to a busy sidewalk, or worse – next to another office or a shop. Your phone, left on the desk or on the windowsill, will almost certainly register plenty of devices belonging to people walking down the street, shopping in the store, or working one wall away in the neighboring office – people you may never have had any contact with at all. Even if the very brief encounters (the passers-by, say) are filtered out, plenty of occasions remain for you to receive false warnings that you were in contact with an infected person. Now imagine living next door to a doctor's waiting room, a pastry shop, or the local tax office.
  • There is also the opposite situation: a courier brings a package up to your fifth-floor flat but has left his phone in the van down on the street. You really did have contact – especially if infection via objects is possible – but your phones will fail to register the fact.

Now add human malice. What, do you think, would stop some jerk from strolling around energetically and then marking himself as infected a few days later, without it being true? Nothing, of course. Doing it just once would trigger false alarms and send many people into self-isolation or to testing centers – where they might actually get infected if they run into genuinely sick people (or set off a secondary wave of warnings after meeting their phones, which were also nearby).

Can you imagine the experience of waking up one day to a message on your phone that reads: "We warn you that in recent days you have been in close proximity to a person who has confirmed being infected with COVID-19. We advise you to self-isolate and/or get tested."? I wish you a few calm days… and nights!

Now imagine this starts happening every 2-3 days. It is not far-fetched – say you work at a bank counter and your phone sat next to you behind the partition, accumulating… "contacts". How many times will you take unpaid leave to self-isolate, or get tested? And after how many cases will you start ignoring the warnings? And what if one of the next warnings happens to be the true one?

In fact, should the epidemic take an unfavorable turn, such a system could drive you mad with false alarms (okay, perhaps with real ones too).

And again there is the opposite problem: with an imprecise implementation (everything above was written to explain why precision is hard), false reassurance may be more dangerous than false alarms. Let us not forget that not everyone will take part in this platform – and for some, participation is not merely a matter of choice. Some simply may not have the means.

Last year a rather optimistic study reported that 97% of Bulgarians have a mobile phone, and 74% of those use a smartphone (how much they really use it as a smartphone is another topic). There is Metcalfe's law, used to estimate the so-called "network effect" of a communication network: it states that the effect is proportional to the square of the number of connected devices. In short, it follows that even if everyone with a smartphone installed the app and it worked perfectly, we could not cover even 50-60% of the events in which the infection might have jumped from one person to another.
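The coverage claim is easy to check on the back of an envelope, taking the cited figures at face value. A contact event is only traceable when *both* participants run the app, so coverage of pairwise events scales with the square of the adoption rate:

```python
# Back-of-the-envelope check, assuming the cited figures: 97% of
# people have a mobile phone and 74% of those have a smartphone.

smartphone_share = 0.97 * 0.74           # ≈ 0.718 of the population
pairwise_coverage = smartphone_share ** 2  # both sides must run the app

# Even with 100% installation among smartphone owners, only about
# half of potential transmission events are observable.
print(f"{pairwise_coverage:.1%}")  # → 51.5%
```

And that is the ceiling under wildly optimistic assumptions; real-world installation rates push the covered fraction far lower.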

Human malice is a problem, but let us not forget human negligence either. What would motivate someone to admit to some app that they are infected with or ill from COVID-19? Concern for others? What if it comes bundled with fear of stigma, or a real threat of losing their livelihood? What if they are simply too frightened to care about anyone else, or only work up the courage to report 5-6 or 10 days after their test – by which time the proximity records for their phone have already been erased from most other phones, because the critical window has passed?

The virus is insidious, no question. But however frightening its reproduction rate may look, for now it appears that one sick person infects on average no more than two or three others, even in the absence of distancing measures (that is actually an awful lot, but we are not entirely sure about the real rate either, since the data available so far is imprecise). Some analyses claim the average person has about 12 contacts per day. That is, we are quite likely to encounter an infected person in our daily life, yet far less likely to actually get infected at that encounter. Put differently, the tracking system is by design one that, in the name of prevention, will overdo the false warnings.

The possible abuses of this technology are far from few. To begin with, there is no reason whatsoever to trust Apple, and especially Google. They promise to forbid developers using this technology from combining it with other techniques whose combination could reveal people's identities, but… that too can be worked around. A simple and relatively easy approach: mount two phones in a busy spot, say next to a shop's checkout – one enrolled in the anti-COVID-19 system, collecting "contacts", and the other with its camera on, taking photos or recording video.

How hard is it to match the faces of the people at the checkout with the identifiers collected by the other phone, based on when they were there? It is trivial. And if they used a loyalty discount card, even their names, address, and mobile number will be in the shop's database. Yes, yes… I know this would be illegal and an abuse under the GDPR, but go prove it if it happens to you. It took me a whole year to get institutional acknowledgment that someone had obviously forged my signature.

Returning to the possible abuses: we might also ponder how easy it would be to manipulate the system into creating an "outbreak" at a competitor's restaurant or shop.

Apple and Google promise that everyone will be able to switch this functionality on or off. The trouble is that in one of the preliminary beta versions of the next iOS update, it was spotted enabled by default. Even though an additional authorized app is needed before any data is collected, this tilts the slippery slope at a bad angle. (Apple has just corrected this; it is now off by default.) No one can guarantee that this technology will not "linger" in our phones after the coronavirus crisis ends. Or that no other use will be invented for it, with even nastier effects on privacy. And certainly no one should be forced to use it.

So let's go easy on the techno-optimism! Classic contact tracing may be slow and may look inefficient, but it is precise; outsourcing that duty, clumsily, to some technology designed for an entirely different purpose creates a pile of new problems – problems that threaten human rights and personal privacy, with elevated risk for people from minority, stigmatized, or marginalized groups. All the more so since this idea crept over from countries like Israel, South Korea, and Singapore, none of which can be held up as a model on human rights.

Technological ideas and resources are needed right now by the scientists searching for a vaccine and a cure. The world's health systems need software solutions. In Bulgaria, for example, we still have no electronic health records, nor e-prescriptions – things that would remain useful after the pandemic, too. Techno-dabbling in tools for tracking people's contacts is dangerous by presumption, of doubtful use at best, and deserves every bit of resistance it meets.

Updated on 8 May 2020:

Yesterday, as if in support of everything written above, the source code of the iOS and Android apps to be used in the United Kingdom was published. Here is what a first reading shows (via Aral Balkan):

  • Only the source code of the mobile apps was published, not that of the server – and it is actually far more important to know what happens there. In other words, this looks like a clever PR move that lets half-baked bureaucrats claim "the code has been published" and mislead the public into believing everything is transparent, when it is not.
  • In fact, without an additional independent audit there is no way to confirm that the apps actually distributed to people will be compiled from exactly this published code. One thing can be published while another is used.
  • The published code shows that the apps collect the make, model, and UUID identifiers of the phones, which opens the door to deanonymizing the user.
  • The apps use Google's Firebase database in the cloud. Congratulations! The data goes exactly where it should, and where it can most easily be cross-referenced with other data, deanonymized, analyzed, profiled, and so on.
  • And as if that were not enough, the apps also use Microsoft Analytics, so that one more tech giant can get a rub. Congratulations once again! Cheers!

Updated on 13 June 2020: Well… it does not work well, the British report.

In other words, keep putting your blind faith in the techno-optimists and the bloody governments!

How to work from home with Raspberry Pi | The Magpi 93

Post Syndicated from Ashley Whittaker original https://www.raspberrypi.org/blog/how-to-work-from-home-with-raspberry-pi-the-magpi-93/

If you find yourself working or learning, or simply socialising from home, Raspberry Pi can help with everything from collaborative productivity to video conferencing. Read more in issue #93 of The MagPi, out now.

01 Install the camera

If you’re using a USB webcam, you can simply insert it into a USB port on Raspberry Pi. If you’re using a Raspberry Pi Camera Module, you’ll need to unpack it, then find the ‘CAMERA’ port on the top of Raspberry Pi – it’s just between the second micro-HDMI port and the 3.5mm AV port. Pinch the shorter sides of the port’s tab with your nails and pull it gently upwards. With Raspberry Pi positioned so the HDMI ports are at the bottom, insert one end of the camera’s ribbon cable into the port so the shiny metal contacts are facing the HDMI port. Hold the cable in place, and gently push the tab back home again.

If the Camera Module doesn’t have the ribbon cable connected, repeat the process for the connector on its underside, making sure the contacts are facing downwards towards the module. Finally, remove the blue plastic film from the camera lens.

02 Enable Camera Module access

Before you can use your Raspberry Pi Camera Module, you need to enable it in Raspbian. If you’re using a USB webcam, you can skip this step. Otherwise, click on the raspberry menu icon in Raspbian, choose Preferences, then click on Raspberry Pi Configuration.

When the tool loads, click on the Interfaces tab, then click on the ‘Enabled’ radio button next to Camera. Click OK, and let Raspberry Pi reboot to load your new settings. If you forget this step, Raspberry Pi won’t be able to communicate with the Camera Module.

03 Set up your microphone

If you’re using a USB webcam, it may come with a microphone built-in; otherwise, you’ll need to connect a USB headset, a USB microphone and separate speakers, or a USB sound card with analogue microphone and speakers to Raspberry Pi. Plug the webcam into one of Raspberry Pi’s USB 2.0 ports, furthest away from the Ethernet connector and marked with black plastic inners.

Right-click on the speaker icon at the top-right of the Raspbian desktop and choose Audio Inputs. Find your microphone or headset in the list, then click it to set it as the default input. If you’re using your TV or monitor’s speakers, you’re done; if you’re using a headset or separate speakers, right-click on the speaker icon and choose your device from the Audio Outputs menu as well.

04 Set access permissions

Click on the Internet icon next to the raspberry menu to load the Chromium web browser. Click in the address box and type hangouts.google.com. When the page loads, click ‘Sign In’ and enter your Google account details; if you don’t already have a Google account, you can sign up for one free of charge.

When you’ve signed in, click Video Call. You’ll be prompted to allow Google Hangouts to access both your microphone and your camera. Click Allow on the prompt that appears. If you Deny access, nobody in the video chat will be able to see or hear you!

05 Invite friends or join a chat

You can invite friends to your video chat by writing their email address in the Invite People box, or copying the link and sending it via another messaging service. They don’t need their own Raspberry Pi to participate – you can use Google Hangouts from a laptop, desktop, smartphone, or tablet. If someone has sent you a link to their video chat, open the message on Raspberry Pi and simply click the link to join automatically.

You can click the microphone or video icons at the bottom of the window to temporarily disable the microphone or camera; click the red handset icon to leave the call. You can click the three dots at the top-right to access more features, including switching the chat to full-screen view and sharing your screen – which will allow guests to see what you’re doing on Raspberry Pi, including any applications or documents you have open.

06 Adjust microphone volume

If your microphone is too quiet, you’ll need to adjust the volume. Click the Terminal icon at the upper-left of the screen, then type alsamixer followed by the ENTER key. This loads an audio mixing tool; when it opens, press F4 to switch to the Capture tab and use the up-arrow and down-arrow keys on the keyboard to increase or decrease the volume. Try small adjustments at first; setting the capture volume too high can cause the audio to ‘clip’, making you harder to hear. When finished, press CTRL+C to exit AlsaMixer, then click the X at the top-right of the Terminal to close it.

Adjust your audio volume settings with the AlsaMixer tool

Work online with your team

Just because you’re not shoulder-to-shoulder with colleagues doesn’t mean you can’t collaborate, thanks to these online tools.

Google Docs

Google Docs is a suite of online productivity tools linked to the Google Drive cloud storage platform, all accessible directly from your browser. Open the browser and go to drive.google.com, then sign in with your Google account – or sign up for a new account if you don’t already have one – for 15GB of free storage plus access to the word processor Google Docs, spreadsheet Google Sheets, presentation tool Google Slides, and more. Connect with colleagues and friends to share files or entire folders, and collaborate within documents with simultaneous multi-user editing, comments, and change suggestions.


Slack

Designed for business, Slack is a text-based instant messaging tool with support for file transfer, rich text, images, video, and more. Slack allows for easy collaboration in Teams, which are then split into multiple channels or rooms – some for casual conversation, others for more focused discussion. If your colleagues or friends already have a Slack team set up, ask them to send you an invite; if not, you can head to app.slack.com and set one up yourself for free.


Discord

Built more for casual use, Discord offers live chat functionality. While the dedicated Discord app includes voice chat support, this is not yet supported on Raspberry Pi – but you can still use text chat by opening the browser, going to discord.com, and choosing the ‘Open Discord in your browser’ option. Choose a username, read and agree to the terms of service, then enter an email address and password to set up your own free Discord server. Alternatively, if you know someone on Discord already, ask them to send you an invitation to access their server.

Firefox Send

If you need to send a document, image, or any other type of file to someone who isn’t on Google Drive, you can use Firefox Send – even if you’re not using the Firefox browser. All files transferred via Firefox Send are encrypted, and can be protected with an optional password, and are automatically deleted after a set number of downloads or length of time. Simply open the browser and go to send.firefox.com; you can send files up to 1GB without an account, or sign up for a free Firefox account to increase the limit to 2.5GB.


GitHub

For programmers, GitHub is a lifesaver. Based around the Git version control system, GitHub lets teams work on a project regardless of distance using repositories of source code and supporting files. Each programmer can have a local copy of the program files, work on them independently, then submit the changes for inclusion in the master copy – complete with the ability to handle conflicting changes. Better still, GitHub offers additional collaboration tools including issue tracking. Open the browser and go to github.com to sign up, or sign in if you have an existing account, and follow the getting started guide on the site.

Read The MagPi for free!

Find more fantastic projects, tutorials, and reviews in The MagPi #93, out now! You can get The MagPi #93 online at our store, or in print from all good newsagents and supermarkets. You can also access The MagPi magazine via our Android and iOS apps.

Don’t forget our super subscription offers, which include a free gift of a Raspberry Pi Zero W when you subscribe for twelve months.

And, as with all our Raspberry Pi Press publications, you can download the free PDF from our website.

The post How to work from home with Raspberry Pi | The Magpi 93 appeared first on Raspberry Pi.

Lassie errors

Post Syndicated from esr original http://esr.ibiblio.org/?p=8674

I didn’t invent this term, but boosting the signal gives me a good excuse for a rant against its referent.

Lassie was a fictional dog. In all her literary, film, and TV adaptations the most recurring plot device was some character getting in trouble (in the print original, two brothers lost in a snowstorm; in popular false memory “Little Timmy fell in a well”, though this never actually happened in the movies or TV series) and Lassie running home to bark at other humans to get them to follow her to the rescue.

In software, “Lassie error” is a diagnostic message that barks “error” while being comprehensively unhelpful about what is actually going on. The term seems to have first surfaced on Twitter in early 2020; there is evidence in the thread of at least two independent inventions, and I would be unsurprised to learn of others.

In the Unix world, a particularly notorious Lassie error is what the ancient line-oriented Unix editor “ed” does on a command error. It says “?” and waits for another command – which is especially confusing since ed doesn’t have a command prompt. Ken Thompson had an almost unique excuse for extreme terseness, as ed was written in 1973 to run on a computer orders of magnitude less capable than the embedded processor in your keyboard.

Herewith the burden of my rant: You are not Ken Thompson, 1973 is a long time gone, and all the cost gradients around error reporting have changed. If you ever hear this term used about one of your error messages, you have screwed up. You should immediately apologize to the person who used it and correct your mistake.

Part of your responsibility as a software engineer, if you take your craft seriously, is to minimize the costs that your own mistakes or failures to anticipate exceptional conditions inflict on others. Users have enough friction costs when software works perfectly; when it fails, you are piling insult on that injury if your Lassie error leaves them without a clue about how to recover.

Really this term is unfair to Lassie, who as a dog didn’t have much of a vocabulary with which to convey nuances. You, as a human, have no such excuse. Every error message you write should contain a description of what went wrong in plain language, and – when error recovery is possible – contain actionable advice about how to recover.

This remains true when you are dealing with user errors. How you deal with (say) a user mistake in configuration-file syntax is part of the user interface of your program just as surely as the normally visible controls are. It is no less important to get that communication right; in fact, it may be more important – because a user encountering an error is a user in trouble that he needs help to get out of. When Little Timmy falls down a well you constructed and put in his path, your responsibility to say something helpful doesn’t lessen just because Timmy made the immediate mistake.

A design pattern I’ve seen used successfully is for immediate error messages to include both a one-line summary of the error and a cookie (like “E2317”) which can be used to look up a longer description including known causes of the problem and remedies. In a hypothetical example, the pair might look like this:

Out of memory during stream parsing (E1723)

E1723: Program ran out of memory while building the deserialized internal representation of a stream dump. Try lowering the value of GOGC to cause more frequent garbage collections, increasing the size of your swap partition, or moving to hardware with more RAM.

The key point here is that the user is not left in the lurch. The messages are not a meaningless bark-bark, but the beginning of a diagnosis and repair sequence.
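The summary-plus-cookie pattern is simple to implement. Here is a minimal sketch with a hypothetical error catalog; the code and wording reuse the E1723 example above purely for illustration:

```python
# A minimal sketch of the summary-plus-cookie pattern. The catalog
# entries here are hypothetical examples, not a real program's errors.

ERROR_CATALOG = {
    "E1723": (
        "Program ran out of memory while building the deserialized "
        "internal representation of a stream dump. Try lowering GOGC, "
        "increasing the size of your swap partition, or moving to "
        "hardware with more RAM."
    ),
}

def report_error(code, summary):
    """Format the one-line immediate message: summary plus cookie."""
    return f"{summary} ({code})"

def explain(code):
    """Resolve a cookie to its longer catalog entry, with known
    causes and remedies; degrade gracefully for unknown cookies."""
    return ERROR_CATALOG.get(code, f"{code}: no catalog entry")

line = report_error("E1723", "Out of memory during stream parsing")
```

The immediate message stays short enough for a log line, while the cookie gives the user (or a search engine) a stable handle on the full diagnosis.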

If the thought of improving user experience in general leaves you unmoved, consider that the pain you prevent with an informative error message is rather likely to be your own, as you use your software months or years down the road or are required to answer pesky questions about it.

As with good comments in your code, it is perhaps most motivating to think of informative error messages as a form of anticipatory mercy towards your future self.

Payload, singleton, and stride lengths

Post Syndicated from esr original http://esr.ibiblio.org/?p=8663

Once again I’m inventing terms for useful distinctions that programmers need to make and sometimes get confused about because they lack precise language.

The motivation today is some issues that came up while I was trying to refactor some data representations to reduce reposurgeon’s working set. I realized that there are no fewer than three different things we can mean by the “length” of a structure in a language like C, Go, or Rust – and no terms to distinguish these senses.

Before reading these definitions, you might want to do a quick read through The Lost Art of Structure Packing.

The first definition is payload length. That is the sum of the lengths of all the data fields in the structure.

The second is stride length. This is the length of the structure with any interior padding and with the trailing padding or dead space required when you have an array of them. This padding is forced by the fact that on most hardware, an instance of a structure normally needs to have the alignment of its widest member for fastest access. If you’re working in C, sizeof gives you back a stride length in bytes.

I derived the term “stride length” for individual structures from a well-established traditional use of “stride” for array programming in PL/1 and FORTRAN that is decades old.

Stride length and payload length coincide if the structure has no interior or trailing padding. This can sometimes happen when you get an arrangement of fields exactly right, or your compiler might have a pragma to force tight packing even though fields may have to be accessed by slower multi-instruction sequences.

“Singleton length” is the term you’re least likely to need. It’s the length of a structure with interior padding but without trailing padding. The reason I’m dubbing it “singleton” length is that it might be relevant in situations where you’re declaring a single instance of a struct not in an array.

Consider the following declarations in C on a 64-bit machine:

struct {int64_t a; int32_t b;} x;
char y;

That structure has a payload length of 12 bytes. Instances of it in an array would normally have a stride length of 16 bytes, with the last four bytes being padding. But in this situation, with a single instance, your compiler might well place the storage for y in the byte immediately following x.b, where there would be trailing padding in an array element.

This struct has a singleton length of 12, same as its payload length. But these are not necessarily identical. Consider this:

struct {int64_t a; char b[6]; int32_t c;} x;

The way this is normally laid out in memory, it will have two bytes of interior padding after b, then 4 bytes of trailing padding after c. Its payload length is 8 + 6 + 4 = 18; its stride length is 8 + 8 + 8 = 24; and its singleton length is 8 + 6 + 2 + 4 = 20.
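These layout rules are easy to verify without writing a C test program: Python's ctypes lays structures out according to the platform C ABI. The figures below assume a typical 64-bit machine where int64_t has 8-byte alignment:

```python
# Checking the padding arithmetic above with ctypes, which follows
# the platform C ABI (assumes a typical 64-bit machine with 8-byte
# alignment for int64_t).

import ctypes

class First(ctypes.Structure):          # struct {int64_t a; int32_t b;}
    _fields_ = [("a", ctypes.c_int64), ("b", ctypes.c_int32)]

class Second(ctypes.Structure):         # struct {int64_t a; char b[6]; int32_t c;}
    _fields_ = [("a", ctypes.c_int64),
                ("b", ctypes.c_char * 6),
                ("c", ctypes.c_int32)]

# sizeof reports the stride length, just like C's sizeof.
print(ctypes.sizeof(First))   # 16: payload 12 + 4 trailing bytes
print(ctypes.sizeof(Second))  # 24: payload 18 + 2 interior + 4 trailing

# Field offsets expose the interior padding directly:
# b occupies bytes 8-13, so c is bumped from 14 to 16.
print(Second.c.offset)        # 16
```

Payload length is just the sum of the field sizes (12 and 18 here); the singleton length of the second struct is c's offset plus c's size, 16 + 4 = 20.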

To avoid confusion, you should develop a habit: any time someone speaks or writes about the “length” of a structure, stop and ask: is this payload length, stride length, or singleton length?

Most usually the answer will be stride length. But someday, most likely when you’re working close to the metal on some low-power embedded system, it might be payload or singleton length – and the difference might actually matter.

Even when it doesn’t matter, having a more exact mental model is good for reducing the frequency of times you have to stop and check yourself because a detail is vague. The map is not the territory, but with a better map you’ll get lost less often.

shellcheck: boosting the signal

Post Syndicated from esr original http://esr.ibiblio.org/?p=8622

I like code-validation tools, because I hate defects in my software and I know that there are lots of kinds of defects that are difficult for an unaided human brain to notice.

On my projects, I throw every code validator I can find at my code. Standbys are cppcheck for C code, pylint for Python, and go lint for Go code. I run these frequently – usually they’re either part of the “make check” I use to run regression tests, or part of the hook script run when I push changes to the public repository.

A few days ago I found another validator that I now really like: shellcheck. Yes, it’s a lint/validator for shell scripts – and in retrospect shell, as spiky and irregular and suffused with multilevel quoting as it is, has needed something like this for a long time.

I haven’t done a lot of shell scripting in the last couple of decades. It’s not a good language for programming at larger orders of magnitude than 10 lines or so – too many tool dependencies, too difficult to track what’s going on. These problems are why Perl and later scripting languages became important; if shell had scaled up better, the space they occupy would have been shell code as far as the eye can see.

But sometimes you write a small script, and then it starts to grow, and you can end up in an awkward size range where it isn’t quite unmanageable enough to drive you to port it to (say) Python yet. I have some cases like this in the reposurgeon suite.

For this sort of thing a shell validator/linter can be a real boon, enabling you to have much more confidence that you’ll catch silly errors when you modify the script, and actually increasing the upper limit of source-line count at which shell remains a viable programming language.

So it is an excellent thing that shellcheck is a solid and carefully-thought-out piece of work. It does catch a lot of nits and potential errors, hardening your script against cases you probably haven’t tested yet. For example, it’s especially good at flagging constructs that will break if a shell variable like $1 gets set to a value with embedded whitespace.

It has other features you want in a code validator, too. You can do line-by-line suppression of specific shellcheck warnings with magic comments, telling the tool “Yes, I really meant to do that” so it will shut up. This means when you get new warnings they are really obvious.

Also, it’s fast. Fast enough that you can run it on all your shellscripts in front of all your regular regression tests and probably barely ever notice the time cost.

It’s standard practice for me to have a “make check” that runs code validators and then the regression tests. I’m going back and adding shellcheck validation to those check productions on all my projects that ship shell scripts. I recommend this as a good habit to everybody.

Reposurgeon defeats all monsters!

Post Syndicated from esr original http://esr.ibiblio.org/?p=8607

On January 12th 2020, reposurgeon performed a successful conversion of its biggest repository ever – the entire history of the GNU Compiler Collection, 280K commits with a history stretching back through 1987. Not only were some parts of the history in CVS, the earliest portions predated CVS and had been stored in RCS.

I waited this long to talk about it to give the dust time to settle on the conversion. But it’s been 5 weeks now and I’ve heard nary a peep from the GCC developers about any problems, so I think we can score this as reposurgeon’s biggest victory yet.

The Go port really proved itself. Those 280K commits can be handled on the 128GB Great Beast with a load time of about two hours. I have to tell the Go garbage collector to be really aggressive – set GOGC=30 – but that’s exactly what GOGC is for.

The Go language really proved itself too. The bet I made on it a year ago paid off handsomely – the increase in throughput from Python is pretty breathtaking, at least an order of magnitude and would have been far more if it weren’t constrained by the slowness of svnadmin dump. Some of that was improved optimization of the algorithms – we knocked out one O(n**2) after translation. More of it, I think, was the combined effect of machine-code speed and much smaller object sizes – that reduced working set a great deal, meaning cache miss penalties got less frequent.

Also we got a lot of speedup out of various parallelization tricks. This deserves mention because Go made it so easy. I wrote – and Julien Rivaud later improved – a function that would run a specified functional hook on the entire set of repository events, multithreading them from a worker pool optimally sized from your machine’s number of processors, or (with the “serialize” debug switch on) running them serially.

That is 35 lines of easily readable code in Go, and we got no fewer than 9 uses out of it in various parts of the code! I have never before used a language in which parallelism is so easy to manage – Go’s implementation of Communicating Sequential Processes is nothing short of genius and should be a model for how concurrency primitives are done in future languages.
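I don’t have that function in front of me as I write this, so the following is only a hedged sketch of the pattern – the names and signature are invented – but it shows why this is so short in Go: a channel of work items, a sync.WaitGroup, and a worker count taken from the processor count.

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// walkEvents runs hook on every event index 0..n-1, using a worker
// pool sized to the machine's processor count. With serialize set,
// it degrades to a plain loop for debugging.
func walkEvents(n int, hook func(i int), serialize bool) {
	if serialize {
		for i := 0; i < n; i++ {
			hook(i)
		}
		return
	}
	indices := make(chan int, n)
	for i := 0; i < n; i++ {
		indices <- i
	}
	close(indices)

	var wg sync.WaitGroup
	for w := 0; w < runtime.NumCPU(); w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for i := range indices { // workers drain the channel
				hook(i)
			}
		}()
	}
	wg.Wait()
}

func main() {
	squares := make([]int, 8)
	// Each worker writes to a distinct index, so no locking needed.
	walkEvents(len(squares), func(i int) { squares[i] = i * i }, false)
	fmt.Println(squares)
}
```

The serialize switch costs almost nothing and pays for itself the first time you need a deterministic run under a debugger.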

Thanks where thanks are due: when word about the GCC translation deadline got out, some of my past reposurgeon contributors – notably Edward Cree, Daniel Brooks, and Julien Rivaud – showed up to help. These guys understood the stakes and put in months of steady, hard work along with me to make the Go port both correct and fast enough to be a practical tool for a 280K-commit translation. Particular thanks to Julien, without whose brilliance and painstaking attention to detail I might never have gotten the Subversion dump reader quite correct.

While I’m giving out applause I cannot omit my apprentice Ian Bruene, whose unobtrusively excellent work on Kommandant provided a replacement for the Cmd class I used in the original Python. The reposurgeon CLI wouldn’t work without it. I recommend it to anyone else who needs to build a CLI in Go.

These guys exemplify the best in what open-source collegiality can be. The success of the GCC lift is almost as much their victory as it is mine.

Build engines suck. Help GPSD select a new one.

Post Syndicated from esr original http://esr.ibiblio.org/?p=8581

One of the eternal mysteries of software is why build engines suck so badly.

Makefiles weren’t terrible as a first try, except for the bizarre decision to make the difference between tabs and spaces critically different so you can screw up your recipe invisibly.

GNU autotools is a massive, gnarly, hideous pile of cruft with far too many moving parts. Troubleshooting any autotools recipe of nontrivial complexity will make you want to ram your forehead repeatedly into a brick wall until the pain stops.

Scons, among the first of the new wave of build engines to arise when mature scripting languages made writing them easier, isn’t bad. Except for the part where the development team was unresponsive at the best of times and development has stagnated for years now.

Waf is also not bad, except for being somewhat cryptic to write and having documentation that would hardly be less comprehensible if it had been written in hieroglyphics.

cmake and meson, two currently popular engines, share one of the (many) fatal flaws of autotools. They don’t run recipes directly, they compile recipes to some simpler form to be run by a back-end builder (typically, but not always, such systems generate makefiles). Recipe errors thrown by the back end don’t necessarily have any direct relationship to an identifiable part of the recipe you give your front end, making troubleshooting unnecessarily painful and tedious.

Wikipedia has a huge list of build-automation systems. Most seem to be relatively new (last decade) and if any of them had distinguished itself enough to be a clear winning choice I’m pretty sure I’d already know it.

Scons is where I landed after getting fed up with GPSD’s autotools build. In retrospect it is still probably the best choice we could have made at the time (around 2006 IIRC), and I think I would be very happy with it if it had lived up to its early promise. It didn’t. Time to move on.

Waf is what NTPsec uses, and while it has served us well the abysmally bad state of the documentation and the relatively high complexity cost of starting a recipe from a blank sheet of paper make it a questionable choice going forward. It doesn’t help that the project owner, while brilliant, does not communicate with lesser mortals very well.

Is there anything in this space that doesn’t have awful flaws?

Here’s what we want:

– works on any reasonable basically-POSIX system for building C programs

– there is no requirement for native Windows builds, and perhaps no requirement for Windows at all.

– has a relatively compact expression of build rules, which more or less means declarative notation rather than writing code to do builds

– has a track record of being maintained, and enough usage by other projects that there is a reasonable expectation that the build system will continue to be a reasonable choice for at least 5 years, and ideally longer

– doesn’t impose difficult requirements beyond what gpsd does already (e.g., needing C++11 for a build tool would be bad)

– has a notion of feature tests, rather than ifdefing per operating system, and promotes a culture of doing it that way

– supports setting build prefix, and enabling/disabling options, etc.

– supports dealing with -L/-R flags as expected on a variety of systems

– supports running tests

– supports running the just-built code before installing so that it uses the built libs and does not pick up the already-in-system libs (necessary for tests)

– supports cross compiling

– supports out-of-tree builds, placing the objects in a specified location outside the source tree

– supports creating distfiles (sufficient if it can shell out to tar commands).

– supports testing the created distfile by unpacking it, doing an out-of-source build and then running tests

– is not a two-phase generator system like autotools, cmake, and meson.

What in Wikipedia’s huge list should we be looking at?

30 Days in the Hole

Post Syndicated from esr original http://esr.ibiblio.org/?p=8543

Yes, it’s been a month since I posted here. To be more precise, 30 Days in the Hole – I’ve been heads-down on a project with a deadline which I just barely met, and then preoccupied with cleanup from that effort.

The project was reposurgeon’s biggest conversion yet, the 280K-commit history of the Gnu Compiler Collection. As of Jan 11 it is officially lifted from Subversion to Git. The effort required to get that done was immense, and involved one hair-raising close call.

I was still debugging the Go translation of the code four months ago when the word came from the GCC team that they had a firm deadline of December 16 to choose between reposurgeon and a set of custom scripts written by a GCC hacker named Maxim Kyurkov. Which I took a look at – and promptly recoiled from in horror.

The problem wasn’t the work of Kyurkov himself; his scripts looked pretty sane to me. But they relied on git-svn, and that was very bad. It works adequately for live gatewaying to a Subversion repository, but if you use it for batch conversions it has any number of murky bugs including a tendency to badly screw up the location of branch joins.

The problem I was facing was that Kyurkov and the GCC guys, never having had their noses rubbed in these problems as I had, might be misled by git-svn’s surface plausibility into using it, and winding up with a subtly damaged conversion and increased friction costs for the rest of time. To head that off, I absolutely had to win on 16 Dec.

Which wasn’t going to be easy. My Subversion dump analyzer had problems of its own. I had persistent failures on some particularly weird cases in my test suite, and the analyzer itself was a hairball that tended to eat RAM at prodigious rates. Early on, it became apparent that the 128GB Great Beast II was actually too small for the job!

But a series of fortunate occurrences followed. One was that a friend at Amazon was able to lend me access to a really superpowered load machine with 512GB. The second and much more important was in mid-October when a couple of occasional reposurgeon contributors, Julien “__FrnchFrgg__” Rivaud and Daniel Brooks, showed up to help – Daniel having wangled his boss’s permission to go full-time on this until it was done. (His boss’s whole company critically depends on GCC flourishing.)

Many, many hours of hard work followed – profiling, smashing out hidden O(n**2) loops that exploded on a repo this size, reducing working set, fixing analyzer bugs. I doubled my lifetime consumption of modafinil. And every time I scoped what was left to do I came up with the same answer: we would just barely make the deadline. Probably.

Until…until I had a moment of perspective after three weeks of futile attempts to patch the latest round of Subversion-dump analyzer bugs and realized that trying to patch-and-kludge my way around the last 5% of weird cases was probably not going to work. The code had become a rubble pile; I couldn’t change anything without breaking something else.

It looked like time to scrap everything downstream of the first-stage stream parser (the simplest part, and the only one I was completely sure was correct) and rebuild the analyzer from first principles using what I had learned from all the recent failures.

Of course the risk I was taking was that come deadline time the analyzer wouldn’t be 95% right but rather catastrophically broken – that there simply wouldn’t be time to get the cleaner code working and qualified. But after thinking about the odds a great deal, I swallowed hard and pulled the trigger on a rewrite.

I made the fateful decision on 29 Nov 2019 and as the Duke of Wellington famously said, “It was a damned near-run thing.” If I had waited even a week longer to pull that trigger, we would probably have failed.

Fortunately, what actually happened was this: I was able to factor the new analyzer into a series of passes, very much like code-analysis phases in a compiler. The number fluctuated; there ended up being 14 of them, but – and this is the key point – each pass was far simpler than the old code, and the relationships between them well-defined. Several intermediate state structures that had become more complication than help were scrapped.

Eventually Julien took over two of the trickier intermediate passes so I could concentrate on the worst of the bunch. Meanwhile, Daniel was unobtrusively finding ways to speed the code and slim its memory usage down. And – a few days before the deadline – the GCC project lead and a sidekick showed up on our project channel to work on improving the conversion recipe.

After that, formally getting the nod to do the conversion was not a huge surprise. But there was a lot of cleanup, verification, and tuning to be done before the official repository cutover on Jan 11. What with one thing and another it was Jan 13 before I could declare victory and ship 4.0.

After which I promptly…collapsed. Having overworked myself, I picked up a cold. Normally for me this is no big deal; I sniffle and sneeze for a few days and it barely slows me down. Not this time – hacking cough, headaches, flu-like symptoms except with no fever at all, and even the occasional dizzy spell because the trouble spread to my left ear canal.

I’m getting better now. But I had planned to go to the big pro-Second Amendment demonstration in Richmond on Jan 20th and had to bail at the last minute because I was too sick to travel.

Anyway, the mission got done. GCC has a really high-quality Git repository now. And there will be a sequel to this, my first GCC compiler mod.

And posting at something like my usual frequency will resume. I have a couple of topics queued up.

Segfaults and Twitter monkeys: a tale of pointlessness

Post Syndicated from esr original http://esr.ibiblio.org/?p=8394

For a few years in the 1990s, when PNG was just getting established as a Web image format, I was a developer on the libpng team.

One reason I got involved is that the compression patent on GIFs was a big deal at the time. I had been the maintainer of GIFLIB since 1989; it was on my watch that Marc Andreesen chose that code for use in the first graphics-capable browser in ’94. But I handed that library off to a hacker in Japan who I thought would be less exposed to the vagaries of U.S. IP law. (Years later, after the century had turned and the LZW patents expired, it came back to me.)

Then, sometime within a few years of 1996, I happened to read the PNG standard, and thought the design of the format was very elegant. So I started submitting patches to libpng and ended up writing the support for six of the minor chunk types, as well as implementing the high-level interface to the library that’s now in general use.

As part of my work on PNG, I volunteered to clean up some code that Greg Roelofs had been maintaining and package it for release. This was “gif2png” and it was more or less the project’s official GIF converter.

(Not to be confused, though, with the GIFLIB tools that convert to and from various other graphics formats, which I also maintain. Those had a different origin, and were, like libgif itself, rather better code.)

gif2png’s role then was more important than it later became. ImageMagick already existed, but not in anything like its current form; GIMP had barely launched, and the idea of a universal image converter hadn’t really taken hold yet. The utilities I ship with GIFLIB also had an importance then that they would later lose as ImageMagick’s “convert” became the tool everyone learned to reach for by reflex.

It has to be said that gif2png wasn’t very good code by today’s standards. It had started life in 1995 as a dorm-room project written in journeyman C, with a degree of carelessness about type discipline and bounds checking that was still normal in C code of the time. Neither Greg nor I gave it the thorough rewrite it perhaps should have gotten because, after all, it worked on every well-formed GIF we ever threw at it. And we had larger problems to tackle.

Still, having taken responsibility for it in ’99, I kept it maintained even as it was steadily decreasing in importance. ImageMagick convert(1) had taken over; I got zero bug reports or RFEs for six years between 2003 and 2009.

I did some minor updating in 2010, but more out of completism than anything else; I was convinced that the user constituency for the tool was gone. And that was fine with me – convert(1) had more eyes on it and was almost certainly better code. So gif2png fell to near the bottom of my priority list and stayed there.

A few years after that, fuzzer attacks on programs started to become a serious thing. I got one against GIFLIB, which was issued a CVE and I took very seriously – rogue code execution in a ubiquitous service library is baaaad. A couple of others in GIFLIB’s associated utility programs, which I took much less seriously as I wasn’t convinced anyone still used them at all. You’re going to exploit these…how?

And, recently, two segfaults in gif2png. Which was absolutely at the bottom of my list of security concerns. Standalone program, designed to be used on input files you trust to be reasonably close to well-formed GIFs (there was a ‘recover’ option that could salvage certain malformed ones if you were very lucky). Next to no userbase since around 2003. Again, you’re going to exploit this…how?

Now, I’m no infosec specialist, but there is real-world evidence that I know how to get my priorities right. I’ve led the NTPsec project for nearly five years now, reworking its code so thoroughly that its size has shrunk by a factor of 4. NTP implementations are a prime attack target because the pre-NTPsec reference version used to be so easy to subvert. And you know what the count of CVEs against our code (as opposed to what we inherited) is?

Zero. Zip. Zilch. Nobody has busted my code or my team’s. Despite half the world’s academics and security auditors running attacks on it. Furthermore, we have a record of generally having plugged about four out of five CVEs in the legacy code by the time they’re issued.

That’s how the security of my code looks when I think it’s worth the effort. For GIFLIB I’ll spend that effort willingly. For the GIFLIB tools, less willingly. But for gif2png, that seemed pointless. I was tired of spending effort to deal with the 47,000th CS student thinking “I know! I’ll run a fuzzer on !” and thinking a crash was a big deal when the program was a superannuated standalone GIF filter that hasn’t seen any serious use since J. Random Student was in diapers.

So two days ago I marked two crashes on malformed input in gif2png won’t-fix, put in a segfault handler so it would die gracefully no matter what shit you shoved at it, and shipped it…

…only to hear a few hours later, from my friend Perry Metzger, that there was a shitstorm going down on Twitter about how shockingly incompetent this was.

Really? They really thought this program was an attack target, and that you could accomplish anything by running rogue code from inside it?

Narrator voice: No, they didn’t. There are some people for whom any excuse to howl and fling feces will do.

A similar bug in libgif or NTPsec would have been a serious matter. But I’m pretty good at not allowing serious bugs to happen in those. In a quarter century of writing critical service code my CVE count is, I think, two (one long ago in fetchmail) with zero exploits in the wild.

This? This ain’t nothin’. Perry did propose a wildly unlikely scenario in which the gif2png binary somehow got wedged in the middle of somebody’s web framework on a server and allowed to see ill-formed input, allowing a remote exploit, but I don’t believe it.

Alas, if I’ve learned anything about living on the modern Internet it’s that arguing that sort of point with the howler monkeys on Twitter is a waste of time. (Actually, arguing anything with the howler monkeys on Twitter is a waste of time.) Besides, the code may not be an actual security hazard, but it has been kind of embarrassing to drag around ever since I picked it up.

So, rather than patch the C and deal with yet another round of meaningless fuzzer bugs in the future, I’ve rewritten it in Go. Here it is, and now that it’s in a type-safe language with access bounds checking I don’t ever have to worry about that class of problem again.

One good thing may come of this episode (other than lifting code out of C, which is always a plus). I notice that the GIF and PNG libraries in Go are, while serviceable for basic tasks, rather limited. You can convert with them, but you can’t do lossless editing with them. Neither one deserializes the entire ontology of its file format.

As the maintainer of GIFLIB and a past libpng core developer, I don’t know where I’d find a better-qualified person to fix this than me. So now on my to-do list, though not at high priority: push some patches upstream to improve these libraries.

Fear of COMITment

Post Syndicated from Eric Raymond original http://esr.ibiblio.org/?p=8375

I shipped the first release of another retro-language revival today: COMIT. Dating from 1957 (coincidentally the year I was born) this was the first string-processing language, ancestral to SNOBOL and sed and ed and Unix shell. One of the notational conventions invented in COMIT, the use of $0, $1…etc. as substitution variables, survives in all these languages.

I actually wrote most of the interpreter three years ago, when a copy of the COMIT book fell into my hands (I think A&D regular Daniel Franke was responsible for that). It wasn’t difficult – 400-odd lines of Python, barely enough to make me break a sweat. That is, until I hit the parts where the book’s description of the language is vague and inadequate.

It was 1957 and nobody knew hardly anything about how to describe computer languages systematically, so I can’t fault Dr. Victor Yngve too harshly. Where I came a particular cropper was trying to understand the intended relationship between indices into the workspace buffer and “right-half relative constituent numbers”. That defeated me, so I went off and did other things.

Over the last couple days, as part of my effort to promote my Patreon feed to a level where my medical expenses are less seriously threatening, I’ve been rebuilding all my project pages to include a Patreon button and an up-to-date list of Bronze and Institutional patrons. While doing this I tripped over the unshipped COMIT code and pondered what to do with it.

What I decided to do was ship it with an 0.1 version number as is. The alternative would have been to choose from several different possible interpretations of the language book and quite possibly get it wrong.

I think a good rule in this kind of situation is “First, do no harm”. I’d rather ship an incomplete implementation that can be verified by eyeball, and that’s what I’ve done – I was able to extract a pretty good set of regression tests for most of the features from the language book.

If someone else cares enough, some really obsessive forensics on the documentation and its code examples might yield enough certainty about the author’s intentions to support a full reconstruction. Alas, we can’t ask him for help, as he died in 2012.

A lot of the value in this revival is putting the language documentation and a code chrestomathy in a form that’s easy to find and read, anyway. Artifacts like COMIT are interesting to study, but actually using it for anything would be perverse.

The dangerous folly of “Software as a Service”

Post Syndicated from Eric Raymond original http://esr.ibiblio.org/?p=8338

Comes the word that Salesforce.com has announced a ban on its customers selling “military-style rifles”.

The reason this ban has teeth is that the company provides “software as a service”; that is, the software you run is a client for servers that the provider owns and operates. If the provider decides it doesn’t want your business, you probably have no real recourse. OK, you could sue for tortious interference in business relationships, but that’s chancy and anyway you didn’t want to be in a lawsuit, you wanted to conduct your business.

This is why “software as a service” is dangerous folly, even worse than old-fashioned proprietary software at saddling you with a strategic business risk. You don’t own the software, the software owns you.

It’s 2019 and I feel like I shouldn’t have to restate the obvious, but if you want to keep control of your business, the software you rely on needs to be open-source. All of it. All of it. And you can’t afford to have it tethered to a service provider even if the software itself is nominally open source.

Otherwise, how do you know some political fanatic isn’t going to decide your product is unclean and chop you off at the knees? It’s rifles today, it’ll be anything that can be tagged “hateful” tomorrow – and you won’t be at the table when the victim-studies majors are defining “hate”. Even if you think you’re their ally, you can’t count on escaping the next turn of the purity spiral.

And that’s disregarding all the more mundane risks that come from the fact that your vendor’s business objectives aren’t the same as yours. This is ground I covered twenty years ago, do I really have to put on the Mr. Famous Guy cape and do the rubber-chicken circuit again? Sigh…

Business leaders should fear every piece of proprietary software and “service” as the dangerous addiction it is. If Salesforce.com’s arrogant diktat teaches that lesson, it will have been a service indeed.

Contributor agreements considered harmful

Post Syndicated from Eric Raymond original http://esr.ibiblio.org/?p=8287

Yesterday I got email from a project asking me to wear my tribal-elder hat, looking for advice on how to re-invent its governance structure. I’m not going to name the project because they haven’t given me permission to air their problems in public, but I need to write about something that came up during the discussion, when my querent said they were thinking about requiring a contributor release form from people submitting code, “the way Apache does”.

“Don’t do it!” I said. Please don’t go the release-form route. It’s bad for the open-source community’s future every time someone does that. In the rest of this post I’ll explain why.

Every time a project says “we need you to sign a release before we’ll take your code”, it helps create a presumption that such releases are necessary – as opposed to the opposite theory, which is that the act of donating code to an open-source project constitutes in itself a voluntary cession of the project’s right to use it under terms implied by the open-source license of the project.

Obviously one of those theories is better for open source – no prize for guessing which.

Here is the language NTPsec uses in its hacking guide:

By submitting patches to this project, you agree to allow them to be redistributed under the project’s license according to the normal forms and usages of the open-source community.

There is as much legal ground for the cession theory of contribution as there is for any idea that contributor releases are required for some nebulous kind of legal safety. There’s no governing statute and no case law on this; no dispute over an attempt to revoke a contribution has yet been adjudicated.

And here’s the ironic part: if it ever comes to a court case, one of the first things the judge is going to look at is community expectations and practice around our licenses. A jurist is supposed to do this in contract and license cases; there’s some famous case law about the interpretation of handshake contracts among Hasidic Jewish diamond merchants in New York City that makes this very clear and explicit. Where there is doubt about interpretation and no overriding problem of equity, the norms of the community within which the license/contract was arrived at should govern.

So, if the judge thinks that we expect contribution permissions to fail closed unless explicitly granted, he/she is more likely to make that happen. On the other hand, if he/she thinks that community norms treat contribution as an implied cession of certain rights in exchange for the benefits of participating in the project, that is almost certainly how the ruling will come out.

I say, therefore, that Apache and the FSF and the Golang project and everybody else requiring contributor releases are wrong. Because there is no governing law on the effect of these release forms, they are not actually protection against any risk, just a sort of ritual fundament-covering that a case of first impression could toss out in a heartbeat. Furthermore, the way they’ve gone wrong is dangerous; this ritual fundament-covering could someday bring about the very harm it was intended to prevent.

If your project has a contributor release, do our whole community a favor and scrap it. Any lawyer who tells you such a thing is necessary is talking out his ass – he doesn’t know that, and at the present state of the law he can’t know it.

(My wife Cathy, the attorney, concurs. So this advice isn’t just a layperson vaporing in ignorance.)

Instead, post a contract of adhesion on your website or in your guide for contributors. Use my language, or edit to taste. The one thing you should be sure stays in is some language equivalent to this part: “according to the normal forms and usages of the open-source community”.

That is important because, if it ever comes to a court case, we want to be able to point the judge at that as a clue that there are normal forms and usages, and he/she can do what he/she is supposed to do – and almost certainly wants to do – by understanding them and applying them.