
4 Important Considerations To Avoid Wasted Cloud Spend

Post Syndicated from Andy Haine original https://www.anchor.com.au/blog/2020/11/4-important-considerations-to-avoid-wasted-cloud-spend/

Growth for cloud services is at an all-time high in 2020, partly due to the COVID-19 pandemic and businesses scrambling to migrate to the cloud as soon as possible. But with that record growth, wasted spend on unnecessary or unoptimised cloud usage is also at an all-time high.


Wasted cloud spend generally boils down to paying for services or resources that you aren’t using. It is most commonly attributable to services that aren’t being used at all in either the development or production stages, services that sit idle most of the time (unused 80-90% of the time), or simply over-provisioned resources (more resources allocated than necessary).


Wasted cloud spend is expected to reach as high as $17.6 billion in 2020. A 2019 report from Flexera measured actual cloud waste at 35 percent of all cloud services revenue. This highlights how much money a business can save by having an experienced, dedicated AWS management team looking after its cloud services. In many cases, having the right team managing your cloud services can more than repay the associated management costs. Read on for some further insight into the most common pitfalls of wasted cloud spending.

Lack Of Research, Skills and/or Management

A lack of proper research, skills or management during a migration to cloud services is probably the most frequent and costly pitfall. Without AWS cloud migration best practices and a comprehensive strategy in place, businesses may dive into setting up their services without realising how steep the initial learning curve can be to manage their cloud spend sufficiently. It’s common for not just businesses, but anyone first experimenting with the cloud, to see a bill that’s much higher than they anticipated. This can lead a business to believe the cloud is significantly more expensive than it really needs to be.


It’s absolutely crucial to have a strategy in place for all potential usage situations, so that you don’t end up paying much more than you should. This is something that a managed cloud provider can expertly design for you, ensuring that you’re only paying for exactly what you need and potentially reducing your spend quite drastically over time.

Unused Or Unnecessary Snapshots

Snapshots create a point-in-time backup of your AWS services. Each snapshot contains all of the information needed to restore your data to the point when the snapshot was taken. This is an incredibly important and useful tool when managed correctly. However, it’s also one of the biggest sources of wasted AWS cloud spend.


Charges for snapshots are based on the amount of data stored, and each snapshot increases the amount of data that you’re storing. Many users take and store a high number of snapshots and never delete them when they’re no longer needed, and in a lot of cases don’t realise that this is steadily inflating their cloud spend.
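As a rough illustration of the kind of audit a management team would run, the sketch below filters snapshot records older than a retention window so they can be reviewed for deletion. The field names here are made up for the example; in practice the records would come from the AWS API (such as EC2’s DescribeSnapshots call) and would follow its response shape.

```javascript
// Sketch: find snapshots older than a retention window so they can be
// reviewed for deletion. The snapshot objects are illustrative stand-ins
// for real API records; only the two fields the check needs are mimicked.
function staleSnapshots(snapshots, maxAgeDays, now = new Date()) {
  const cutoff = now.getTime() - maxAgeDays * 24 * 60 * 60 * 1000;
  return snapshots
    .filter(s => new Date(s.startTime).getTime() < cutoff)
    .map(s => s.snapshotId);
}

console.log(staleSnapshots(
  [
    { snapshotId: 'snap-old', startTime: '2020-01-01T00:00:00Z' },
    { snapshotId: 'snap-new', startTime: '2020-10-01T00:00:00Z' },
  ],
  90, // retention window in days
  new Date('2020-11-01T00:00:00Z')
));
// -> [ 'snap-old' ]
```

Anything this turns up would still need a human decision before deletion, but running a check like this on a schedule is what stops old snapshots accumulating unnoticed.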

Idle Resources

Idle resources account for another large portion of cloud waste. Idle resources are resources that aren’t being used for anything, yet you’re still paying for them. They can be useful in the event of a resource spike, but for the most part may not be worth paying for when you look at your average usage over a period of time. A good analogy would be paying rent on a holiday home all year round when you only spend two weeks there every Christmas. This is where horizontal scaling comes into play. When set up by skilled AWS experts, horizontal scaling can turn services and resources on or off depending on when they are actually needed.
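To make the idea of “idle” concrete, here is a minimal sketch that flags instances whose average utilisation over a period falls below a threshold. The data shapes and names are illustrative, not an AWS API; real utilisation figures would come from a monitoring service such as CloudWatch.

```javascript
// Sketch: flag instances whose average CPU utilisation over a sampled
// period falls below a threshold percentage. Instances flagged this way
// are candidates for shutdown, downsizing, or scheduled scaling.
function findIdle(instances, thresholdPct = 10) {
  return instances
    .filter(i => {
      const avg = i.cpuSamples.reduce((sum, s) => sum + s, 0) / i.cpuSamples.length;
      return avg < thresholdPct;
    })
    .map(i => i.id);
}

console.log(findIdle([
  { id: 'web-1', cpuSamples: [55, 60, 48] },  // busy: average ~54%
  { id: 'batch-1', cpuSamples: [2, 1, 3] },   // idle: average 2%
]));
// -> [ 'batch-1' ]
```

The holiday-home analogy maps directly: the threshold is your judgement of how little use still justifies paying year-round rent.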

Over-Provisioned Services

This issue ties into idle resources, as seen above. Over-provisioned services refers to paying for entire instances that are not in use at all, or only minimally. This could be an Amazon RDS service for a database that’s not in use, an Amazon EC2 instance that’s idle 100% of the time, or any number of other services. It’s important to have a cloud strategy that involves frequently auditing which services your business is and isn’t using, in order to minimise your cloud spend as much as possible.


As the Flexera statistics above show, wasted cloud spend is one of the most significant problems facing businesses that have migrated to the cloud. But with the right team of experts in place, wasted spend can easily be avoided and can even offset management costs, leaving you in a far better position in terms of service performance, reliability and support, and overall costs.

The post 4 Important Considerations To Avoid Wasted Cloud Spend appeared first on AWS Managed Services by Anchor.

Our Top 4 Favorite Google Chrome DevTools Tips & Tricks

Post Syndicated from Andy Haine original https://www.anchor.com.au/blog/2020/10/our-top-4-favorite-google-chrome-devtools-tips-tricks/

Welcome to the final instalment of our 3-part series on Google Chrome’s DevTools. In part 1 and part 2, we explored an introduction to using DevTools, as well as how you can use it to diagnose SSL and security issues on your site. In this third and final part, we will share our 4 favourite tips and tricks to help you accomplish a variety of tasks efficiently with DevTools, without ever leaving your website!

Clearing All Site Data

Perhaps one of the most frustrating things when building your website is the occasional menace that is browser caching. If you’ve put a lot of time into building websites, you probably know the feeling of making a change and then wondering why your site still shows the old page after you refresh. Beyond that, there can be any number of other reasons why you may need to clear all of your site data. Commonly, one might be inclined to just flush all cookies and cache settings in their browser, wiping their history from every website. This can lead to being suddenly logged out of all of your usual haunts – a frustrating inconvenience if you’re ever in a hurry to get something done.

Thankfully, DevTools has a handy little tool that allows you to clear all data related to the site that you’re currently on, without wiping anything else.

  1. Open up DevTools
  2. Click on the “Application” tab. If you can’t see it, just increase the width of your DevTools or click the arrow to view all available tabs.
  3. Click on the “Clear storage” tab under the “Application” heading.
  4. You will see how much local disk space that specific website is using on your computer. To clear it all, just click on the “Clear site data” button.

That’s it! Your cache, cookies, and all other associated data for that website will be wiped out, without losing cached data for any other website.

Testing Device Responsiveness

Mobile devices now make up more than half of all website traffic. That means it’s more important than ever to ensure your website is fully responsive and looking sharp across all devices, not just desktop computers. Chrome DevTools has an incredibly useful tool that lets you view your website as if you were viewing it on a mobile device.

  1. Open up DevTools.
  2. Click the “Toggle device toolbar” button in the top left corner. It looks like a tablet and mobile device. Alternatively, press Ctrl + Shift + M on Windows, or Cmd + Shift + M on Mac.
  3. At the top of your screen you should now see a dropdown of available devices to pick from, such as iPhone X. Selecting a device will adjust your screen’s ratio to that of the selected device.

Much easier than sneaking down to the Apple store to test out your site on every model of iPhone or iPad, right?

Viewing Console Errors

Sometimes you may experience an error on your site, and not know where to look for more information. This is where DevTools’ Console tab can come in very handy. If you experience any form of error on your site, you can likely follow these steps to find a lead on what to do to solve it:

  1. Open up DevTools
  2. Select the “Console” tab
  3. If your console logged any errors, you can find them here. You may see a 403 error, or a 500 error, etc. The console will generally log extra information too.

If you follow the above steps and you see a 403 error, you then know your action was not completed due to a permissions issue – which can get you started on the right track to troubleshooting the issue. Whatever the error(s) may be, there is usually a plethora of information available on potential solutions by individually researching those error codes or phrases on Google or your search engine of choice.
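As a quick reference while troubleshooting, here is a small lookup of common HTTP error codes and what they usually indicate. This table is our own memory aid, not something DevTools provides, and each hint is a simplification of the full status definition:

```javascript
// A memory aid mapping common HTTP error codes to their usual causes.
// The hints simplify the standard status-code definitions.
const httpHints = {
  400: 'Bad Request - the server rejected what the browser sent',
  403: 'Forbidden - a permissions issue on the server',
  404: 'Not Found - the resource path is wrong or the file is missing',
  500: 'Internal Server Error - the application failed server-side',
  503: 'Service Unavailable - the server is overloaded or down',
};

console.log(httpHints[403]); // -> 'Forbidden - a permissions issue on the server'
```

Whichever code the console shows, searching for the exact number and message will usually surface the specific fix for your platform.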

Edit Any Text On The Page

While you can right-click on text, choose “Inspect element”, and then modify text that way, this alternative method allows you to modify any text on a website as if you were editing a regular document or a Photoshop file.

  1. Open up DevTools
  2. Go to the “Console” tab
  3. Copy and paste the following into the console and hit enter:
    1. document.designMode = "on"

Once that’s done, you can select any text on your page and edit it. This is actually one of the more fun DevTools features, and it can make testing text changes on your site an absolute breeze.


This concludes our entire DevTools series! We hope you’ve enjoyed it and maybe picked up a few new tools along the way. Our series only really scratches the surface of what DevTools can be used for, but we hope this has offered a useful introduction to some of the types of things you can accomplish. If you want to keep learning, be sure to head over to Google’s Chrome DevTools documentation for so much more!

The post Our Top 4 Favorite Google Chrome DevTools Tips & Tricks appeared first on AWS Managed Services by Anchor.

Diagnosing Security Issues with Google Chrome’s DevTools

Post Syndicated from Andy Haine original https://www.anchor.com.au/blog/2020/10/diagnosing-security-issues-with-google-chromes-devtools/

Welcome to part 2 of our 3-part series delving into some of the most useful features and functions of Google Chrome’s DevTools. In part 1, we went over a brief introduction to DevTools, plus some minor customisations. In part 2, we’ll take a look at the security panel section of DevTools, including some of the things you can examine when diagnosing a website or application for security and SSL issues.

The Security Panel

One of Chrome’s most helpful features has to be the security panel. To begin, visit any website through Google Chrome and open up DevTools, then select “Security” from the list of tabs at the top. If you can’t see it, you may need to click the two arrows to display more options or increase the width of DevTools.

Inspecting Your SSL Certificate

When we talk about security on websites, one of the first things we usually consider is the presence of an SSL certificate. The security tab allows us to inspect the website’s SSL certificate, which can have many practical uses. For example, when you visit your website, you may see a concerning red “Unsafe” warning. If you suspect that it may be something to do with your SSL certificate, it’s very likely that you’re correct. The problem is, the issue with your SSL certificate could be any number of things. It may be expired, revoked, or maybe no SSL certificate exists at all. This is where DevTools can come in handy. With the Security tab open, go ahead and click “View certificate” to inspect your SSL certificate. In doing so, you will be able to see what domain the SSL has been issued to, what certificate authority issued it, and its expiration date – among various other details, such as the full certification path.

For insecure or SSL warnings, viewing your SSL certificate is the perfect first step in the troubleshooting process.

Diagnosing Mixed Content

Sometimes your website may show as insecure and not display a green padlock in your address bar. You may have checked that your SSL certificate is valid using the method above, and everything is well and good there, but your site is still not displaying a padlock. This can be due to what’s called mixed content. Put simply, mixed content means that your website itself is configured to load over HTTPS://, but some resources (scripts, images, etc.) on your website are set to load over HTTP://. For a website to show as fully secure, all resources must be served over HTTPS://, and your website’s URL must also be configured to load as HTTPS://.

Any resources that are not loading securely are vulnerable to man-in-the-middle attacks, whereby a malicious actor can intercept data sent through your website, potentially leaking private information. This is doubly important for eCommerce sites or any site handling personal information, which is why it’s so important to ensure that your website is fully secure – not to mention that it increases users’ trust in your website.

To assist in diagnosing mixed content, head back into the security tab again. Once you have that open, go ahead and refresh the website that you’re diagnosing. If there are any non-secure resources on the page, the panel on the left-hand side will list them. Secure resources will be green, and those non-secure will be red. Oftentimes this can be one or two images with an HTTP:// URL. Whatever the case, this is one of the easiest ways to diagnose what’s preventing your site from gaining a green padlock. Once you have a list of which content is insecure, you can go ahead and manually adjust those issues on your website.
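If you would rather pull the same list programmatically, the console itself can help. The sketch below filters out resource URLs that were loaded over plain HTTP; in a real page you would pass it `performance.getEntriesByType('resource')`, a standard browser API, and here the filter is written as a plain function so the logic stands on its own:

```javascript
// Sketch: list resources loaded over plain HTTP (mixed content).
// In a page's DevTools console, call it as:
//   insecureResources(performance.getEntriesByType('resource'))
function insecureResources(entries) {
  return entries
    .map(entry => entry.name) // a resource entry's name is its URL
    .filter(url => url.startsWith('http://'));
}

console.log(insecureResources([
  { name: 'https://example.com/app.js' },
  { name: 'http://example.com/logo.png' },
]));
// -> [ 'http://example.com/logo.png' ]
```

Any URLs it returns are the same ones the security panel would mark in red, ready to be fixed in your site’s source.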

There are sites like “Why No Padlock?” that effectively do the same thing as the steps listed above, but the beauty of DevTools is that it’s one tool that can do it all for you, without you having to leave your website.


This concludes part 2 of our 3-part DevTools series! As always, be sure to head over to Google’s Chrome DevTools documentation for further information on everything discussed here.

We hope that this has helped you gain some insight into how you might practically use DevTools when troubleshooting security and SSL issues on your own site. Now that you’re familiar with the basics of the security panel, stay tuned for part 3, where we will get stuck into some of the most useful DevTools tips and tricks of all.

The post Diagnosing Security Issues with Google Chrome’s DevTools appeared first on AWS Managed Services by Anchor.

An Introduction To Getting Started with Google Chrome’s DevTools

Post Syndicated from Andy Haine original https://www.anchor.com.au/blog/2020/10/an-introduction-to-getting-started-with-google-chromes-devtools/

Whether you’re a cloud administrator or developer, having a strong arsenal of dev tools under your belt will help to make your everyday tasks and website or application maintenance a lot more efficient.

One of the tools our developers use every day to assist our clients is Chrome’s DevTools. Whether you work on websites or applications for your own clients, or you manage your own company’s assets, DevTools is definitely worth spending the time to get to know. From style and design to troubleshooting technical issues, you would be hard-pressed to find a more effective tool for both.

Whether you already use Chrome’s DevTools on a daily basis, or you’re yet to discover its power and functionality, we hope we can show you something new in our 3-part DevTools series! In part 1 of this series, we will be giving you a brief introduction to DevTools. In part 2, we will cover diagnosing security issues using DevTools. Finally, in part 3, we’ll go over some of the more useful tips and tricks that you can use to enhance your workflow.

While this series uses Chrome’s DevTools, most of the advice also applies to other popular browsers’ developer tools, such as those in Microsoft Edge or Mozilla Firefox. Although the functionality and location of the tools will differ, a quick Google search should help you dig up anything you’re after.

An Introduction to Chrome DevTools

Chrome DevTools, also known as Chrome Developer Tools, is a set of tools built into the Chrome browser to assist web/application developers and novice users alike. Some of the things it can be used for include, but are not limited to:

  • Debugging and troubleshooting errors
  • Editing on the fly
  • Adjusting/testing styling (CSS) before making live changes
  • Emulating different network speeds (like 3G) to determine load times on slower networks
  • Testing responsiveness across different devices
  • Performance auditing to ensure your website or application is fast and well optimised

All of the above features can greatly enhance productivity when you’re building or editing, whether you’re a professional developer or a hobbyist looking to build your first site or application.

Chrome DevTools has been around for a long time (since Chrome’s initial release), but it’s a tool that has been continuously worked on and improved since its beginnings. It is now extremely feature-rich, and still being improved every day. Keep in mind, the above features are only a very brief overview of all the functionality DevTools has to offer. In this series, we’ll get you comfortably acquainted with DevTools, but you can also find very in-depth documentation over at Google’s DevTools site, where they provide breakdowns of every feature.

How to open Chrome DevTools

There are a few different ways that you can access DevTools in your Chrome browser.

  1. Open DevTools by right-clicking on anything within the browser page, and select the “Inspect” button. This will open DevTools and jump to the specific element that you selected.
  2. Another method is via the Chrome browser menu. Simply go to the top right corner and click the three dots > More tools > Developer tools.
  3. If you prefer hotkeys, you can open DevTools by doing either of the following, depending on your operating system:

Windows = F12 or Ctrl + Shift + I

Mac = Cmd + Opt + I

Customising your Environment

Now that you know what DevTools is and how to open it, it’s worth spending a little bit of time customising DevTools to your own personal preferences.

To begin with, DevTools has a built-in dark mode. When you’re looking at code or a lot of small text all the time, using a dark theme can greatly help to reduce eye strain. Enabling dark mode can be done by following the instructions below:

  1. Open up DevTools using your preferred method above
  2. Once you’re in, click the settings cog on the top right to open up the DevTools settings panel
  3. Under the ‘Appearance’ heading, adjust the ‘Theme’ to ‘Dark’

You may wish to spend some time exploring the remainder of the preferences section of DevTools, as there are a lot of layout and functionality customisations available.


This concludes part 1 of our 3-part DevTools series. We hope that this has been a useful and informative introduction to getting started with DevTools for your own project! Now that you’re familiar with the basics, stay tuned for part 2, where we will show you how to diagnose basic security issues – and more!

The post An Introduction To Getting Started with Google Chrome’s DevTools appeared first on AWS Managed Services by Anchor.

My SubscribeStar account is live

Post Syndicated from esr original http://esr.ibiblio.org/?p=8758

I’ve finally gotten validated by SubscribeStar, which means I can get payouts through it, which means those of you who want nothing to do with Patreon can contribute through it: https://www.subscribestar.com/esr

If you’re not contributing, and you’re a regular here, please chip in. While I’ve had some research grants in the past, right now nobody is subsidizing the infrastructure work I do and I’m burning my savings.

There’s NTPsec, of course – secure time synchronization is critical to the Internet. There are the 40 or so other projects I maintain. Recently I’m working on improving the tools ecosystem for the Go language, because to reduce defect rates we have got to lift infrastructure like NTPsec out of C, and Go looks like the most plausible path. Right now I’m modifying Flex so it can be retargeted to emit Go (and other languages); I expect to do Go support for Bison next.

I’m not the only person deserving of your support – I’ve founded the Loadsharers network for people who want to help with the more general infrastructure-support problem – but if you’re a regular on this blog I hope I’m personally relatively high on your priority list. I’m not utterly broke yet, but the prospect is looming over me and my house needs a new roof.

Even $5 a month is helpful if enough of you match it. For $20 a month you get to be credited as a supporter when I ship a release of one of my personal projects. From $50 a month I can buy my long-suffering wife a nice dinner and afford an occasional trip to the shooting range.

I live a simple life, trying to be of service to my civilization. Please help that continue.

How not to treat a customer

Post Syndicated from esr original http://esr.ibiblio.org/?p=8730

First, my complaint to Simply NUC about the recent comedy of errors around my attempt to order a replacement fan for Cathy’s NUC. Sorry, I was not able to beat WordPress’s new editor into displaying URLs literally, and I have no idea why the last one turns into a Kindle link.


Subject: An unfortunate series of events

Simply NUC claims to be a one-stop shop for NUC needs and a customer-centric company. I would very much like to do business with an outfit that lives up to Simply NUC’s claims for itself. This email is about how I observed it to fail on both levels.

A little over a week ago my wife’s NUC – which is her desktop
machine, having replaced a conventional tower system in 2018 –
developed a serious case of bearing whine. Since 1981 I have
built, tinkered with, and deployed more PCs than I can remember,
so I knew this probably meant the NUC’s fan bearings were becoming
worn and could pack up at any moment.

Shipping the machine out for service was unappealing, partly for cost
reasons but mostly because my wife does paying work on it and can’t
afford to have it out of service for an unpredictable amount of time.
So I went shopping for a replacement fan.

The search “NUC fan replacement” took me here:

NUC Replacement Fans

There was a sentence that said “As of right now SimplyNUC offers
replacement fans for all NUC models.” Chasing the embedded link
landed me on the Simply NUC site here:

Nuc Accessories

Now bear in mind that I had not disassembled my wife’s NUC yet,
that I had landed from a link that said “replacement fans for all
NUC models”, and that I didn’t know different NUCs used different
fan sizes.

The first problem I had was that this page did nothing to even hint
that the one fan pictured might not be a universal fit. Dominic has
told me over the phone that “Dawson” is a NUC type, and if I had known
that I might have interpreted the caption as “fit only for Dawsons”.
But I didn’t, and the caption “Dawson BAPA0508R5U fan” looks exactly
as though “Dawson” is the *fan vendor*.

So I placed the order, muttering to myself because there aren’t any
shipping options less expensive than FedEx.

A properly informative page would have labeled the fan with its product code and had text below that said “Compatible with Dawson Canyon NUCs.” That way, customers landing there could get a clue that the BAPA0508R5U is not a universal replacement for all NUC fans.

A page in conformance with Simply NUC’s stated mission to be a
one-stop NUC shop would also carry purchase links to other fans fitted
for different model ranges, like the Delta BSC0805HA I found out later
is required for my wife’s NUC8i3BEH1.

The uninformative website page was strike one.

In the event, when the fan arrived, I disassembled my wife’s
NUC and instantly discovered that (a) it wasn’t even remotely the right
size, and (b) it didn’t even match the fan in the website picture! What you
shipped was not a BAPA0508R5U, it was a BAAA0508RSH.

Not getting the product I ordered was strike two.

I got on the Simply NUC website’s Zendesk chat and talked with a
person named Bobbie who seemed to want to be helpful (I point this out
because, until I spoke with Dominic, this was the one single occasion
on which Simply NUC behaved like it might be run by competent
people). I ended up emailing her a side-by-side photo of the two fans.
It’s attached.

Bobbie handed me off to one Sean McClure, and that is when my
experience turned from bad to terrible. If I were a small-minded
person I would be suggesting that you fire Mr. McClure. But I’m
not; I think the actual fault here is that nobody has ever explained
to this man what his actual job is, nor trained him properly in
how to do it.

And that is his *management’s* fault. Somebody – possibly one
of the addressees of this note – failed him.

Back during the dot-com boom I was on the board of directors of
a Silly Valley startup that sold PCs to run Linux, competing
directly with Sun Microsystems. So I *do* in fact know what Sean
McClure’s job is. It’s to *retain customers*. It’s to not alienate
possible future revenue streams.

When a properly trained support representative reads a story like
mine, the first words he types ought to be something equivalent to
“I’m terribly sorry, we clearly screwed up, let me set up an RMA for
that.” Then we could discuss how Simply NUC can serve my actual needs.

That is how you recruit a loyal customer who will do repeat business
and recommend you to his peers. That is how you live up to the
language on the “About” page of your website.

Here’s what happened instead:

Unfortunately we don’t keep those fans in stock. You can try reaching out to
Intel directly to see if they have a replacement or if they will need to
your device. You can submit warranty requests to: supporttickets.intel.com,
a login will need to be created in order to submit the warranty request.
Fans can also be sourced online but will require personal research.

This is not an answer, it’s a defensive crouch that says “We don’t
care, and we don’t want your future business”. Let me enumerate the
ways it is wrong, in case you two are so close to the problem that you
don’t see it.

1. 99% odds that a customer with a specific requirement for a
replacement part is calling you because he does *not* want to RMA the
entire device and have it out of service for an unpredictable amount
of time. A support tech that doesn’t understand this has not been
taught to identify with a customer in distress.

2. A support tech that understands his real job – customer retention – will move heaven and earth rather than refer the customer to a competing vendor. Even if the order was only for a $15 fan, because the customer might be experimenting to see if the company is a competent outfit to handle bigger orders. As I was; you were never going to get 1,000 orders for whole NUCs from me but more than one was certainly possible. And I have a lot of friends.

3. “Personal research”? That’s the phrase that really made me
angry. If it’s not Simply NUC’s job to know how to source parts for
NUCs, so that I the customer don’t have to know that, what *is* the
company’s value proposition?

Matters were not improved when I discovered that typing BAPA0508R5U
into a search engine instantly turned up several sources for the fan
I need, including this Amazon page:

A support tech who understood his actual job would have done that
search the instant he had IDed the fan from the image I sent him, and
replied approximately like this: “We don’t currently stock that fan;
I’ll ask our product guys to fix this and it should show on our Fans
page in <a reasonable period>. In the meantime, I found it on Amazon;
here’s the link.”

As it is, “personal research” was strike three.

Oh, and my return query about whether I could get a refund wasn’t even
answered.

My first reaction to this sequence of blunders was to leave a scathingly bad review of Simply NUC on TrustPilot. My second reaction was to think that, in fairness, the company deserves a full account of the blunders directed at somebody with the authority to fix what is broken.

Your move.


Here’s the reply I got:


Mr. Raymond, while I always welcome customer feedback and analyze it for
opportunities to improve our operations, I will not entertain customers who
verbally berate, belittle, or otherwise use profanity directed at my
employees or our company. That is a more important core value of our
company than the pursuit of revenue of any size.

I’ve instructed Sean to cancel the return shipping label as we’ve used
enough of each other’s time in this transaction. You may retain the blower
if it can be of any use to you or one of your friends in the future, or
dispose of it in an environmentally friendly manner.

I will request a refund to your credit card for the $15 price of the
product ASAP.

I don’t think any of this needs further elaboration on my part, but I note that Simply NUC has since modified its fans page to be a bit more informative.

A user story about user stories

Post Syndicated from esr original http://esr.ibiblio.org/?p=8720

The way I learned to use the term “user story”, back in the late 1990s at the beginnings of what is now called “agile programming”, was to describe a kind of roleplaying exercise in which you imagine a person and the person’s use case as a way of getting an outside perspective on the design, the documentation, and especially the UI of something you’re writing.

For example:

Meet Joe. He works for Randomcorp, who has a nasty huge old Subversion repository they want him to convert to Git. Joe is a recent grad who got thrown at the problem because he’s new on the job and his manager figures this is a good performance test in a place where the damage will be easily contained if he screws up. Joe himself doesn’t know this, but his teammates have figured it out.

Joe is smart and ambitious but has little experience with large projects yet. He knows there’s an open-source culture out there, but isn’t part of it – he’s thought about running Linux at home because the more senior geeks around him all seem to do that, but hasn’t found a good specific reason to jump yet. In truth most of what he does with his home machine is play games. He likes “Elite: Dangerous” and the Bioshock series.

Joe knows Git pretty well, mainly through the Tortoise GUI under Windows; he learned it in school. He has only used Subversion just enough to know basic commands. He found reposurgeon by doing web searches. Joe is fairly sure reposurgeon can do the job he needs and has told his boss this, but he has no idea where to start.

What does Joe’s discovery process look like? Read the first two chapters of “Repository Editing with Reposurgeon” using Joe’s eyes. Is he going to hit this wall of text and bounce? If so, what could be done to make it more accessible? Is there some way to write a FAQ that would help him? If so, can we start listing the questions in the FAQ?

Joe has used gdb a little as part of a class assignment but has not otherwise seen programs with a CLI resembling reposurgeon’s. When he runs it, what is he likely to try to do first to get oriented? Is that going to help him feel like he knows what’s going on, or confuse him?

“Repository Editing…” says he ought to use repotool to set up a Makefile and stub scripts for the standard conversion workflow. What will Joe’s eyes tell him when he looks at the generated Makefile? What parts are likeliest to confuse him? What could be done to fix that?

Joe, my fictional character, is about as little like me as is plausible at a programming shop in 2020, and that’s the point. If I ask abstractly “What can I do to improve reposurgeon’s UI?”, it is likely I will just end up spinning my wheels; if, instead, I ask “What does Joe see when he looks at this?” I am more likely to get a useful answer.

It works even better if, even having learned what you can from your imaginary Joe, you make up other characters that are different from you and as different from each other as possible. For example, meet Jane the system administrator, who got stuck with the conversion job because her boss thinks of version-control systems as an administrative detail and doesn’t want to spend programmer time on it. What do her eyes see?

In fact, the technique is so powerful that I got an idea while writing this example. Maybe in reposurgeon’s interactive mode it should issue a first line that says “Interactive help is available; type ‘help’ for a topic menu.”

However. If you search the web for “design by user story”, what you are likely to find doesn’t resemble my previous description at all. Mostly, now twenty years after the beginnings of “agile programming”, you’ll see formulaic stuff equating “user story” with a one-sentence soundbite of the form “As an X, I want to do Y”. This will be surrounded by a lot of talk about processes and scrum masters and scribbling things on index cards.

There is so much gone wrong with this it is hard to even know where to begin. Let’s start with the fact that one of the original agile slogans was “Individuals and Interactions Over Processes and Tools”. That slogan could be read in a number of different ways, but under none of them at all does it make sense to abandon a method for extended insight into the reactions of your likely users for a one-sentence parody of the method that is surrounded and hemmed in by bureaucratic process-gabble.

This is embedded in a larger story about how “agile” went wrong. The composers of the Agile Manifesto intended it to be a liberating force, a more humane and effective way to organize software development work that would connect developers to their users to the benefit of both. A few of the ideas that came out of it were positive and important – besides design by user story, test-centric development and refactoring leap to mind.

Sad to say, though, the way “user stories” became trivialized in most versions of agile is all too representative of what it has often become under the influence of two corrupting forces. One is fad-chasers looking to make a buck on it, selling it like snake oil to managers forever perplexed by low productivity, high defect rates, and inability to make deadlines. Another is the managers’ own willingness to sacrifice productivity gains for the illusion of process control.

It may be too late to save “agile” in general from becoming a deadening parody of what it was originally intended to be, but it’s not too late to save design by user story. To do this, we need to bear down on some points that its inventors and popularizers were never publicly clear about, possibly because they themselves didn’t entirely understand what they had found.

Point one is how and why it works. Design by user story is a trick you play on your social-monkey brain that uses its fondness for narrative and characters to get you to step out of your own shoes.

Yes, sure, there’s a philosophical argument that stepping out of your shoes in this sense is impossible; Joe, being your fiction, is limited by what you can imagine. Nevertheless, this brain hack actually works. Eppur si muove; you can generate insights with it that you wouldn’t have had otherwise.

Point two is that design by user story works regardless of the rest of your methodology. You don’t have to buy any of the assumptions or jargon or processes that usually fly in formation with it to get use out of it.

Point three is that design by user story is not a technique for generating code, it’s a technique for changing your mind. If you approach it in an overly narrow and instrumental way, you won’t imagine apparently irrelevant details like what kinds of video games Joe likes. But you should do that sort of thing; the brain hack works in exact proportion to how much imaginative life you give your characters.

(Which, in particular, is why “As an X, I want to do Y” is such a sadly reductive parody. This formula is designed to stereotype the process, but stereotyping is the enemy of novelty, and novelty is exactly what you want to generate.)

A few of my readers might have the right kind of experience for this to sound familiar. The mental process is similar to what in theater and cinema is called “method acting.” The goal is also similar – to generate situational responses that are outside your normal habits.

Once again: you have to get past tools and practices to discover that the important part of software design – the most difficult and worthwhile part – is mindset. In this case, and temporarily, someone else’s.

Rules for rioters

Post Syndicated from esr original http://esr.ibiblio.org/?p=8708

I had business outside today. I needed to go in towards Philly, closer to the riots, to get a new PSU put into the Great Beast. I went armed; I’ve been carrying at all times awake since Philadelphia started to burn and there were occasional reports of looters heading into the suburbs in other cities.

I knew I might be heading into civil unrest today. It didn’t happen. But it still could.

Therefore I’m announcing my rules of engagement should any of the riots connected with the atrocious murder of George Floyd reach the vicinity of my person.

  1. I will shoot any person engaging in arson or other life-threatening behavior, issuing a warning to cease first if safety permits.
  2. Blacks and other minorities are otherwise safe from my gun; they have a legitimate grievance in the matter of this murder, and what they’re doing to their own neighborhoods and lives will be punishment enough for the utter folly of their means of expression once the dust settles.
  3. White rioters, on the other hand, will be presumed to be Antifa Communists attempting to manipulate this tragedy for Communist political ends; them I consider “enemies-general of all mankind, to be dealt with as wolves are” and will shoot immediately, without mercy or warning.

Designing tasteful CLIs: a case study

Post Syndicated from esr original http://esr.ibiblio.org/?p=8697

Yesterday evening my apprentice, Ian Bruene, tossed a design question at me.

Ian is working on a utility he calls “igor” intended to script interactions with GitLab, a major public forge site. Like many such sites, it has a sort of remote-procedure-call interface that allows you, as an alternative to clicky-dancing on the visible Web interface, to pass it JSON datagrams and get back responses that do useful things like – for example – publishing a release tarball of a project where GitLab users can easily find it.

Igor is going to have (actually, already has) one mode that looks like a command interpreter for a little minilanguage, with each command being an action verb like “upload” or “release”. The idea is not so much for users to drive this manually as for them to be able to write scripts in the minilanguage which become part of a project’s canned release procedure. (This is why GUIs are irrelevant to this whole discussion; you can’t script a GUI.)

Ian, quite reasonably, also wants users to be able to run simple igor commands in a fire-and-forget mode by typing “igor” followed by command-line arguments. Now, classically, under Unix, you would expect a single-line “release” command to be designed to look something like this:

$ igor -r -n fooproject -t 1.2.3 foo-1.2.3.tgz

(To be clear, the dollar sign on the left is a shell prompt, put in to emphasize that this is something you type direct to a shell.)

In this invocation, the “-r” option says “I want to do a release”, the -n option says “This is the GitLab name of the project I’m shipping a release of”, the -t option specifies a release tag, and the following filename argument is the name of the tarball you want to publish.

It might not look exactly like this. Maybe there’d be yet another switch that lets you attach a release notes file. Maybe you’d have the utility deduce the project name from the directory it’s running in. But the basic style of this CLI (= Command Line Interface), with option flags like -r that act as command verbs and other flags that exist to attach their arguments to the request, is very familiar to any Unix user. This is what most Unix system commands look like.

One of the design rules of the old-school style is that the first token on the line that is not a switch argument terminates recognition of switches. It, and all tokens after it, are treated as arguments to be passed to the program and are normally expected to be filenames (or, in the 21st century, filename-like things like URLs).

Another characteristic of this style is that the order of the switch clauses is not fixed. You could write

$ igor -t 1.2.3 -n fooproject -r foo-1.2.3.tgz

and it would mean the same thing. (Order of the following arguments, on the other hand, will usually be significant if there is more than one.)
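To make the old-school rules concrete, here is a sketch of parsing for the hypothetical igor invocation above, using Python’s getopt module, which follows the classic Unix convention: switch clauses in any order, with recognition ending at the first non-switch token. (igor and its flags are this post’s invented example, not a real tool.)

```python
import getopt

# Old-school switch parsing for the hypothetical "igor" tool.
# "rn:t:" means: -r takes no argument, -n and -t each take one.
# getopt stops scanning at the first token that is not a switch;
# that token and everything after it land in `rest`.
def parse(argv):
    opts, rest = getopt.getopt(argv, "rn:t:")
    return dict(opts), rest

# Switch clauses may appear in any order; both lines mean the same thing.
a = parse(["-r", "-n", "fooproject", "-t", "1.2.3", "foo-1.2.3.tgz"])
b = parse(["-t", "1.2.3", "-n", "fooproject", "-r", "foo-1.2.3.tgz"])
assert a == b
```

Note that dict(opts) erases switch order, which is exactly the point: in this style order carries no meaning, only the pairing of each switch with its argument does.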

For purposes of this post I’m going to call this style old-school UNIX CLI, because Ian’s puzzlement comes from a collision he’s having with a newer style of doing things. And, actually, with a third interface style, also ancient but still vigorous.

When those of us in Unix-land only had the old-school CLI style as a model it was difficult to realize that all of those switches, though compact and easy to type, imposed a relatively high cognitive load. They were, and still are, difficult to remember. But we couldn’t really notice this until we had something to contrast it with that met similar functional requirements with lower cognitive effort.

Though there may have been earlier precedents, the first well-known program to use something recognizably like what I will call new-school CLI was the CVS version control system. The distinguishing trope was this: Each CVS command begins with a subcommand verb, like “cvs update” or “cvs checkout”. If there are switches, they normally follow the subcommand rather than preceding it. And there are fewer switches.

Later version-control systems like Subversion and Mercurial picked up on the subcommand idea and used it to further reduce the number of arbitrary-looking switches users had to remember. In Subversion, especially, your normal workflow could consist of a sequence of svn add, svn update, svn status, and svn commit commands during which you’d never type anything that looked like an old-school Unixy switch at all. This was easy to remember, easy to document, and users liked it.

Users liked it because humans are used to remembering associations between actions and natural-language verbs; “release” is less of a memory load than “-r” even if it takes longer to type. Which illuminates one of the drivers of the old-school style: it was shaped back in the 1970s by 110-baud Teletypes, on which terseness and having to type only a few characters were powerful virtues.

After Subversion and Mercurial, Git came along, with its CLI written in a style that, though it uses leading subcommand verbs, is rather more switch-heavy. From the point of view of being comfortable for users (especially new users), this was a pretty serious regression from Subversion. But then the CLI of git wasn’t really a design at all, it was an accretion of features that there was little attempt to simplify or systematize. It’s fair to say that git has succeeded despite its rather spiky UI rather than because of it.

Git is, however, a digression here; I’ve mainly described it to make clear that you can lose the comfort benefits of the new-school CLI if a lot of old-school-style switches crowd in around the action verbs.

Next we need to look at a third UI style, which I’m going to call “GDB style” because the best-known program that uses it today is the GNU symbolic debugger. It’s almost as ancient as old-school CLIs, going back to the early 1980s at least.

A program like GDB is almost never invoked as a one-liner at all; a command is something you type to its internal command prompt, not the shell. As with new-school CLIs like Subversion’s, all commands begin with an action verb, but there are no switches. Each space-separated token after the verb on the command line is passed to the command handler as a positional argument.

Part of Igor’s interface is intended to be a GDB-style interpreter. In that mode, the release command should logically look something like this, with igor’s command prompt at the left margin:

igor> release fooproject 1.2.3 foo-1.2.3.tgz

Note that this is the same arguments in the same order as our old-school “igor -r” command, but now -r has been replaced by a command verb and the order of what follows it is fixed. If we were designing Igor to be Subversion-like, with a fire-and-forget interface and no internal command interpreter at all, it would correspond to a shell command line like this:

$ igor release fooproject 1.2.3 foo-1.2.3.tgz

This is where we get to the collision of design styles I referred to earlier. What was really confusing Ian, I think, is that part of his experience was pulling for old-school fire-and-forget with switches, part of his experience was pulling for new-school as filtered through git’s rather botched version of it, and then there is this internal GDB-like interpreter to reconcile with how the command line works.

My apprentice’s confusion was completely reasonable. There’s a real question here which the tradition he’s immersed in has no canned, best-practices answer for. Git and GDB evade it in equal and opposite ways – Git by not having any internal interpreter like GDB, GDB by not being designed to do anything in a fire-and-forget mode without going through its internal interpreter.

The question is: how do you design a tool that (a) has a GDB-like internal interpreter for a command minilanguage, (b) also allows you to write useful fire-and-forget one-liners in the shell without diving into that interpreter, (c) has syntax for those one-liners that looks like an old-school CLI, and (d) has only one syntax for each command?

And the answer is: you can’t actually satisfy all four of those constraints at once. One of them has to give. It’s trivially true that if you abandon (a) or (b) you evade the problem, the way Git and GDB do. The real problem is that an old-school CLI wants to have terse switch clauses with flexible order, a GDB-style minilanguage wants to have more verbose commands with positional arguments, and never these twain shall meet.

The only one-syntax-for-each-command choice you can make is to have the same command interpreter parse your command line and what the user types to the internal prompt.

I bit this bullet when I designed reposurgeon, which is why a fire-and-forget command to read a stream dump of a Subversion repository and build a live repository from it looks like this:

$ reposurgeon "read <project .svn" "prefer git" "rebuild ../overthere"

Each of those string arguments is just fed to reposurgeon’s internal interpreter; any attempt to look like an old-school CLI has been abandoned. This way, I can fire and forget multiple reposurgeon commands; for Igor, it might be more appropriate to pass all the tokens on the command line as a single command.
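A minimal sketch of this one-parser design follows; the command set, the interpret() function, and the state dictionary are all invented for illustration. The key property is that fire-and-forget arguments and lines typed at the interactive prompt flow through the same interpreter, so each command has exactly one syntax:

```python
import shlex
import sys

# One interpreter serves both modes. Each command is a verb followed by
# GDB-style positional arguments; there are no switches.
def interpret(line, state):
    tokens = shlex.split(line)
    if not tokens:
        return
    verb, args = tokens[0], tokens[1:]
    handler = COMMANDS.get(verb)
    if handler is None:
        raise ValueError("unknown command: " + verb)
    handler(state, args)

def do_release(state, args):
    project, tag, tarball = args
    state["released"] = (project, tag, tarball)

COMMANDS = {"release": do_release}

def main(argv):
    state = {}
    if argv:                 # fire-and-forget mode: each argument is one command
        for line in argv:
            interpret(line, state)
    else:                    # interactive mode: read commands from the prompt
        for line in sys.stdin:
            interpret(line, state)
    return state

# Equivalent of: $ igor "release fooproject 1.2.3 foo-1.2.3.tgz"
state = main(['release fooproject 1.2.3 foo-1.2.3.tgz'])
assert state["released"] == ("fooproject", "1.2.3", "foo-1.2.3.tgz")
```

Because both modes funnel into interpret(), there is nothing to document twice and no way for the two syntaxes to drift apart.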

The other possible way Igor could go is to have a command language for the internal interpreter in which each line looks like a new-school shell command with a command verb followed by switch clusters:

igor> release -t 1.2.3 -n fooproject foo-1.2.3.tgz

Which is fine except that now we’ve violated some of the implicit rules of the GDB style. Those aren’t simple positional arguments, and we’re back to the higher cognitive load of having to remember cryptic switches.

But maybe that’s what your prospective users would be comfortable with, because it fits their established habits! This seems to me unlikely but possible.

Design questions like these generally come down to having some sense of who your audience is. Who are they? What do they want? What will surprise them the least? What will fit into their existing workflows and tools best?

I could launch into a panegyric on the agile-programming practice of design-by-user-story at this point; I think this is one of the things agile most clearly gets right. Instead, I’ll leave the reader with a recommendation to read up on that idea and learn how to do it right. Your users will be grateful.

Two graceful finishes

Post Syndicated from esr original http://esr.ibiblio.org/?p=8694

I’m having a rather odd feeling.

Reposurgeon. It’s…done; it’s a finished tool, fully fit for its intended purposes. After nine years of work and thinking, there’s nothing serious left on the to-do list. Nothing to do until someone files a bug or something in its environment changes, like someone writing an exporter/importer pair it doesn’t know about and should.

When you wrestle with a problem that is difficult and worthy for long enough, the problem becomes part of you. Having that go away is actually a bit disconcerting, like putting your foot on a step that’s not there. But it’s OK; there are lots of other interesting problems out there and I’m sure one will find me to replace reposurgeon’s place in my life.

I might try to write a synoptic look back on the project at some point.

Looking over some back blog posts on reposurgeon, I became aware that I never told my blog audience the last bit of the saga following my ankle surgery. That’s because there was no drama. The ankle is now fully healed and as solidly functional as though I never injured it at all – I’ve even stopped having residual aches in damp weather.

Evidently the internal cartilage healed up completely, which is far from a given with this sort of injury. My thanks to everyone who was supportive when I literally couldn’t walk.

Term of the day: builder gloves

Post Syndicated from esr original http://esr.ibiblio.org/?p=8688

Another in my continuing series of attempts to coin, or popularize, terms that software engineers don’t know they need yet. This one comes from my apprentice, Ian Bruene.

“Builder gloves” is the special knowledge possessed by the builder of a tool which allows the builder to use it without getting fingers burned.

Software that requires builder gloves to use is almost always faulty. There are rare exceptions to this rule, when the application area of the software is so arcane that the builder’s specialist knowledge is essential to driving it. But usually the way to bet is that if your code requires builder gloves it is half-baked, buggy, has a poorly designed UI or is poorly documented.

When you ship software that you know requires builder gloves, or someone else tells you that it seems to require builder gloves, it could ruin someone else’s day and reflect badly on you. But if you believe in releasing early and often, sometimes half-baked is going to happen. Here’s how to mitigate the problem.

1. Warn the users what’s buggy and unstable in your release notes and the rest of your documentation.

2. Document your assumptions where the user can see them.

3. Work harder at not being a terrible UI designer.

Becoming really good at software engineering requires that you care about the experience the user sees, not just the code you can see.

This is your final warning

Post Syndicated from esr original http://esr.ibiblio.org/?p=8685

Earlier today, armed demonstrators stormed the Michigan State House protesting the state’s stay-at-home order.

I’m not going to delve in to the specific politics around the stay-at-home order, or whether I think it’s a good idea or a bad one, because there is a more important point to be made here. Actually, two important points.

(1) Nobody got shot. These protesters were not out-of-control yahoos intent on violence. This was a carefully calibrated and very controlled demonstration.

(2) This is the American constitutional system working correctly and as designed by the Founders. When the patience of the people has been pushed past its limit by tyranny and usurpation, armed revolt is what is supposed to happen. The threat of popular armed revolt is an intentional and central part of our system of checks and balances.

We aren’t at that point yet, though. The Michigan legislators should consider that they have received a final warning before actual shooting. The protesters demonstrated and threatened just as George Washington, Thomas Jefferson, Patrick Henry, and other Founders expected and wanted citizens to demonstrate and threaten in like circumstances.

I am sure there will be calls from the usual suspects to tighten gun laws and arrest the protesters as domestic terrorists. All of which will miss the point. Nobody got shot – this was the last attempt, within the norms of the Constitutional system as designed, to avoid violence.

If the Michigan state government responds to this demonstration with repression or violence, citizens will have the right – indeed, they will have a Constitutional duty – to correct the arrogance of power via armed revolt.

This was your final warning, legislators. Choose wisely.

Lassie errors

Post Syndicated from esr original http://esr.ibiblio.org/?p=8674

I didn’t invent this term, but boosting the signal gives me a good excuse for a rant against its referent.

Lassie was a fictional dog. In all her literary, film, and TV adaptations the most recurring plot device was some character getting in trouble (in the print original, two brothers lost in a snowstorm; in popular false memory “Little Timmy fell in a well”, though this never actually happened in the movies or TV series) and Lassie running home to bark at other humans to get them to follow her to the rescue.

In software, “Lassie error” is a diagnostic message that barks “error” while being comprehensively unhelpful about what is actually going on. The term seems to have first surfaced on Twitter in early 2020; there is evidence in the thread of at least two independent inventions, and I would be unsurprised to learn of others.

In the Unix world, a particularly notorious Lassie error is what the ancient line-oriented Unix editor “ed” does on a command error. It says “?” and waits for another command – which is especially confusing since ed doesn’t have a command prompt. Ken Thompson had an almost unique excuse for extreme terseness, as ed was written in 1973 to run on a computer orders of magnitude less capable than the embedded processor in your keyboard.

Herewith the burden of my rant: You are not Ken Thompson, 1973 is a long time gone, and all the cost gradients around error reporting have changed. If you ever hear this term used about one of your error messages, you have screwed up. You should immediately apologize to the person who used it and correct your mistake.

Part of your responsibility as a software engineer, if you take your craft seriously, is to minimize the costs that your own mistakes or failures to anticipate exceptional conditions inflict on others. Users have enough friction costs when software works perfectly; when it fails, you are piling insult on that injury if your Lassie error leaves them without a clue about how to recover.

Really this term is unfair to Lassie, who as a dog didn’t have much of a vocabulary with which to convey nuances. You, as a human, have no such excuse. Every error message you write should contain a description of what went wrong in plain language, and – when error recovery is possible – contain actionable advice about how to recover.

This remains true when you are dealing with user errors. How you deal with (say) a user mistake in configuration-file syntax is part of the user interface of your program just as surely as the normally visible controls are. It is no less important to get that communication right; in fact, it may be more important – because a user encountering an error is a user in trouble that he needs help to get out of. When Little Timmy falls down a well you constructed and put in his path, your responsibility to say something helpful doesn’t lessen just because Timmy made the immediate mistake.

A design pattern I’ve seen used successfully is for immediate error messages to include both a one-line summary of the error and a cookie (like “E2317”) which can be used to look up a longer description including known causes of the problem and remedies. In a hypothetical example, the pair might look like this:

Out of memory during stream parsing (E1723)

E1723: Program ran out of memory while building the deserialized internal representation of a stream dump. Try lowering the value of GOGC to cause more frequent garbage collections, increasing the size of your swap partition, or moving to hardware with more RAM.

The key point here is that the user is not left in the lurch. The messages are not a meaningless bark-bark, but the beginning of a diagnosis and repair sequence.
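The pattern is easy to implement. Here is a sketch (the exception class, catalog, and E1723 entry are invented for illustration): each error carries a one-line summary plus a cookie that indexes a catalog of longer descriptions.

```python
# Sketch of the cookie pattern: a one-line summary for the immediate
# message, plus a code that indexes a catalog of long-form diagnoses.
CATALOG = {
    "E1723": (
        "Program ran out of memory while building the deserialized "
        "internal representation of a stream dump. Try lowering GOGC, "
        "increasing the size of your swap partition, or moving to "
        "hardware with more RAM."
    ),
}

class DiagnosticError(Exception):
    def __init__(self, summary, code):
        super().__init__(f"{summary} ({code})")
        self.code = code

    def explain(self):
        # Look up the long-form description; degrade gracefully if the
        # catalog has no entry for this code.
        return CATALOG.get(self.code, "no further information")

try:
    raise DiagnosticError("Out of memory during stream parsing", "E1723")
except DiagnosticError as e:
    print(e)            # one-line summary with cookie
    print(e.explain())  # long-form diagnosis and remedies
```

A user who only wants to know something failed reads the one-liner; a user in trouble follows the cookie to causes and remedies.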

If the thought of improving user experience in general leaves you unmoved, consider that the pain you prevent with an informative error message is rather likely to be your own, as you use your software months or years down the road or are required to answer pesky questions about it.

As with good comments in your code, it is perhaps most motivating to think of informative error messages as a form of anticipatory mercy towards your future self.

Payload, singleton, and stride lengths

Post Syndicated from esr original http://esr.ibiblio.org/?p=8663

Once again I’m inventing terms for useful distinctions that programmers need to make and sometimes get confused about because they lack precise language.

The motivation today is some issues that came up while I was trying to refactor some data representations to reduce reposurgeon’s working set. I realized that there are no fewer than three different things we can mean by the “length” of a structure in a language like C, Go, or Rust – and no terms to distinguish these senses.

Before reading these definitions, you might want to do a quick read through The Lost Art of Structure Packing.

The first definition is payload length. That is the sum of the lengths of all the data fields in the structure.

The second is stride length. This is the length of the structure with any interior padding and with the trailing padding or dead space required when you have an array of them. This padding is forced by the fact that on most hardware, an instance of a structure normally needs to have the alignment of its widest member for fastest access. If you’re working in C, sizeof gives you back a stride length in bytes.

I derived the term “stride length” for individual structures from a well-established traditional use of “stride” for array programming in PL/1 and FORTRAN that is decades old.

Stride length and payload length coincide if the structure has no interior or trailing padding. This can sometimes happen when you get an arrangement of fields exactly right, or your compiler might have a pragma to force tight packing even though fields may have to be accessed by slower multi-instruction sequences.

“Singleton length” is the term you’re least likely to need. It’s the length of a structure with interior padding but without trailing padding. The reason I’m dubbing it “singleton” length is that it might be relevant in situations where you’re declaring a single instance of a struct not in an array.

Consider the following declarations in C on a 64-bit machine:

struct {int64_t a; int32_t b;} x;
char y;

That structure has a payload length of 12 bytes. Instances of it in an array would normally have a stride length of 16 bytes, with the last four bytes being padding. But in this situation, with a single instance, your compiler might well place the storage for y in the byte immediately following x.b, where there would be trailing padding in an array element.

This struct has a singleton length of 12, same as its payload length. But these are not necessarily identical. Consider this:

struct {int64_t a; char b[6]; int32_t c;} x;

The way this is normally laid out in memory, it will have two bytes of interior padding after b, then 4 bytes of trailing padding after c. Its payload length is 8 + 6 + 4 = 18; its stride length is 8 + 8 + 8 = 24; and its singleton length is 8 + 6 + 2 + 4 = 20.
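You can check a layout like this without writing C, since Python’s ctypes mirrors the host C compiler’s alignment and padding rules. This sketch assumes a typical 64-bit ABI (8-byte int64 alignment):

```python
import ctypes

# Mirror of: struct {int64_t a; char b[6]; int32_t c;} x;
class S(ctypes.Structure):
    _fields_ = [("a", ctypes.c_int64),
                ("b", ctypes.c_char * 6),
                ("c", ctypes.c_int32)]

payload = 8 + 6 + 4            # sum of the data fields alone: 18
stride = ctypes.sizeof(S)      # sizeof() reports stride length: 24
assert stride == 24
assert S.b.offset == 8         # b starts right after a
assert S.c.offset == 16        # 2 bytes of interior padding after b
singleton = S.c.offset + 4     # interior padding, no trailing padding: 20
assert singleton == 20
```

The same trick, applied to any struct you care about, answers the “which length?” question empirically for your platform.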

To avoid confusion, you should develop a habit: any time someone speaks or writes about the “length” of a structure, stop and ask: is this payload length, stride length, or singleton length?

Most usually the answer will be stride length. But someday, most likely when you’re working close to the metal on some low-power embedded system, it might be payload or singleton length – and the difference might actually matter.

Even when it doesn’t matter, having a more exact mental model is good for reducing the frequency of times you have to stop and check yourself because a detail is vague. The map is not the territory, but with a better map you’ll get lost less often.

Insights need you to keep your nerve

Post Syndicated from esr original http://esr.ibiblio.org/?p=8657

This is a story I’ve occasionally told various friends when one of the subjects it touches comes up. I told it again last night, and it occurred to me that I ought to put it in the blog. It’s about how, if you want to have productive insights, you need a certain kind of nerve or self-belief.

Many years ago – possibly as far back as the late 80s – I happened across a film of a roomful of Sufi dervishes performing a mystical/devotional exercise called “dhikr”. The film was very old, grainy B&W footage from the early 20th century. It showed a roomful of bearded, turbaned, be-robed men swaying, spinning, and chanting. Some were gazing at bright objects that might have been lamps, or polished metal or jewelry reflecting other lamps – it wasn’t easy to tell from the footage.

I can’t find the footage I saw, but the flavor was a bit like this. No unison movement in what I saw, though – individuals doing different things and ignoring each other, more inward-focused.

The text accompanying the film explained that the intention of “dhikr” is to shut out the imperfect sensory world so the dervish can focus on the pure and holy name of Allah. “Right,” I thought, already having had quite a bit of experience as an experimental mystic myself, “I get this. In Zen language, they’re shutting down the drunken monkeys. Autohypnosis inducing a serene mind, nothing surprising here.”

But there was something else. Something about the induction methods they were using. It all seemed oddly familiar, more than it ought to. I had seen behaviors like this before somewhere, from people who weren’t wearing pre-Kemalist Turkish garb. I watched the film…and it hit me. This was exactly like watching a roomful of people with serious autism!

The rocking. The droning. The fixated behavior, or in the Sufi case the behavior designed to induce fixation. Which immediately led to the next question: why? I think the least hypothesis in cases where you observe parallel behaviors is that they have parallel causation. We know what the Sufis tell us about what they’re doing; might it tell us why the autists are doing what they’re doing?

The Sufis are trying to shut out sense data. What if the autists are too? That would imply that the autists live in a state of what is, for them, perpetual sensory overload. Their dhikr-like behaviors are a coping mechanism, an attempt to turn down the gain on their sensors so they can have some peace inside their own skulls.

The first applications of nerve I want to talk about here are (a) the nerve to believe that autistic behaviors have an explanation more interesting than “uhhh…those people are randomly broken”, and (b) the nerve to believe that you can apply a heuristic like “parallel behavior, parallel causes” to humans when you picked it up from animal ethology.

Insights need creativity and mental flexibility, but they also need you to keep your nerve. I think there are some very common forms of failing to keep your nerve that people who would like to have good and novel ideas self-sabotage with. One is “If that were true, somebody would have noticed it years ago”. Another is “Only certified specialists in X are likely to have good novel ideas about X, and I’m not a specialist in X, so it’s a bad risk to try following through.”

You, dear reader, are almost certainly browsing this blog because I’m pretty good at not falling victim to those, and duly became famous by having a few good ideas that I didn’t drop on the floor. However, in this case, I failed to keep my nerve in another bog-standard way: I believed an expert who said my idea was silly.

That was decades ago. Nowadays, the idea that autists have a sensory-overload problem is not even controversial – in fact it’s well integrated into therapeutic recommendations. I don’t know when that changed, because I haven’t followed autism research closely enough. Might even be the case that somewhere in the research literature, someone other than me has noticed the similarity between semi-compulsive autistic behaviors and Sufi dhikr, or other similar autohypnotic practices associated with mystical schools.

But I got there before the experts did. And dropped the idea because my nerve failed.

Now, it can be argued that there were good reasons for me not to have pursued it. Getting a real hearing for a heterodox idea is difficult in fields where the experts all have their own theories they’re heavily invested in, and success is unlikely enough that perhaps it wasn’t an efficient use of my time to try. That’s a sad reason, but in principle a sound one.

But losing my nerve because an expert laughed at me, that was not sound. I think I wouldn’t make that mistake today; I’m tougher and more confident than I used to be, in part because I’ve had “crazy” ideas that I’ve lived to see become everyone’s conventional wisdom.

You can read this as a variation on a theme I developed in Eric and the Quantum Experts: A Cautionary Tale. But it bears repeating. If you want to be successfully creative, your insights need you to keep your nerve.

2020-04-04 trial online ИББ

Post Syndicated from Vasil Kolev original https://vasil.ludost.net/blog/?p=3440

A while ago they closed “Krivoto”, which was on the corner of “Dondukov” and “Budapeshta”, and with it ИББ disappeared too. I hadn't managed to get there in a long time, and I was planning to go the very week the pandemic was declared…

So, there is talk of reviving it, and since meeting in person isn't possible, tonight we held an experimental gathering at jitsi.ludost.net/ibb to see how it goes. There were about ten of us; we chatted for an hour or two, had a drink, had a bite, and went off to sleep 🙂 So we'll probably hold another gathering this Wednesday, for anyone who misses it. It doesn't look possible for all of us to turn our cameras on without our machines catching fire, but audio-only is fine too.

I definitely count it in the category of things that help us stay sane 🙂

PSA: COVID-19 is a bad reason to get a firearm

Post Syndicated from esr original http://esr.ibiblio.org/?p=8627

I’m a long-time advocate of more ordinary citizens getting themselves firearms and learning to use them safely and competently. But this is a public-service announcement: if you’re thinking of running out to buy a gun because of COVID-19, please don’t.

There are disaster scenarios in which getting armed up in a hurry makes sense; the precondition for all of them is a collapse of civil order. That’s not going to happen with COVID-19 – the mortality rate is too low.

Be aware that the gun culture doesn't like and doesn't trust panic buyers; they tend to be annoying flake cases who are more of a liability than an asset. We prefer a higher-quality intake than we can get in the middle of a plague panic. Slow down. Think. And if you've somehow formed the idea that you're in a zombie movie or a Road Warrior sequel, chill. That's not a useful reaction; it can lead to panic shootings and those are never good.

I don’t mean to discourage anyone from buying guns in the general case – more armed citizens are a good thing on multiple levels. After we’re through the worst of this would be a good time for it. But do it calmly, learn the Four Rules of Firearms Safety first, and train, train, train. Get good with your weapons, and confident enough not to shoot unless you have to, before the next episode of shit-hits-the-fan.

2020-03-21 threat models

Post Syndicated from Vasil Kolev original https://vasil.ludost.net/blog/?p=3439

A big problem is the expectation that everyone knows what you know. The following was prompted by a conversation I had a little while ago.

Right now everyone is frantically looking for a way to move their work home. I'm one of the lucky ones – at our company everything can in principle be done remotely, and we've set things up that way (a consequence of the large number of admins per capita at the firm; I think we're over 50%). Also, because of the nature of the work, we've thought seriously about how to make things secure, and the first step in that is deciding on the threat model (i.e. “what/whom are we defending against?”), since that determines whether the measures taken make any sense.

The easiest question is “whom are we defending against” – assessing their capabilities and deciding what measures to take. The basic options are “script kiddies”, “opportunists”, “the competition”, “the state”, and “the NSA”.

“Script kiddies” are the easiest, and the defenses are basically the standard measures everyone takes – from having no trivial passwords and no unpatched services, to making sure people know not to open dangerous documents (or, if their job is to receive documents, giving them an environment in which to view them safely, as is the case with journalists, for example).

“Opportunists” are people in the right place at the right moment. An example would be datacenter workers who know they're about to be fired and walk off with a disk from some random server. Or thieves who have gotten hold of some of your equipment, powered it down and carted it away, and you don't want the information on it falling into the wrong hands or being stealable. Or somewhat more targeted scammers, and who knows what else. Here you need to think a bit more seriously about the weak spots and take measures (which may be “fill the machine with concrete, bolt it to the floor and weld it shut”) that make sense for the situation.

“The competition” is very close to the above, only more targeted. There are other avenues here too, e.g. legal ones (I'm not familiar with the Bulgarian legislation on the subject, but anyone for whom it matters will have to look into it). The resources may also be greater here, i.e. your competition may pay someone from the previous category for something more interesting.

“The state” is super interesting, because this is where people's utopian thinking shows. Encrypted files, secret channels and so on – most of that protects against the previous categories, not this one. Against the rubber-hose method (i.e. being beaten until you talk) you don't defend with technical means but far more with operational ones, such as ensuring that no one in the affected jurisdiction has access to the things being protected. The current situation with the quarantine and the state of emergency shows this very well – if we assume we're on the territory of an adversary with resources several orders of magnitude greater than ours, then, setting aside people trained specifically for this purpose (i.e. spies), who have a serious support system behind them and still get caught more than 10% of the time (because operational security is a very hard thing), there's hardly anyone else for whom it makes sense to consider such measures.
Or in simpler terms: if everything you have is in a country under a state of emergency, then however incompetent the authorities may be in some respects, they certainly know how to search and how to beat, and any measures must take that into account – for example, by trying not to have the state as our adversary…

“The NSA” is more of a catch-all for what is called an “APT” (Advanced Persistent Threat, i.e. people with a lot of brains, motivation and resources), and is an example of when you need air gaps, firewalls and other extreme measures of that kind, like Faraday cages, procedures that require at least two people, and so on. If you deal with payment systems and the like, this is basically part of your threat model; otherwise it's very unlikely, and very expensive, to defend against them.

So, if you're planning how your organisation will work from home, figure out what you've been defending against so far, achieve the same (which in most cases means some VPNs, a bit of training, and a bit of hardware like microphones), and only then decide whether that is actually adequate and improve on it. But it's important not to take measures merely for the sake of taking measures.
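The mapping from adversaries to measures described above can be sketched as a small lookup. This is purely illustrative (not from the original post): the category names and measure labels are my own shorthand for the examples given in the text, and a real threat-modeling exercise would of course be far more detailed.

```python
# Illustrative sketch: the five adversary categories discussed above,
# each mapped to the kinds of measures the text associates with it.
THREAT_MODEL = {
    "script kiddies": ["no trivial passwords", "patched services",
                       "safe document-viewing environment"],
    "opportunists":   ["physical hardening of machines"],
    "competition":    ["physical hardening of machines", "legal review"],
    "state":          ["operational security (no in-jurisdiction access)"],
    "NSA/APT":        ["air gaps", "Faraday cages", "two-person rules"],
}

def required_measures(adversaries):
    """Union of measures for the adversaries you actually face,
    preserving order and dropping duplicates."""
    measures = []
    for adversary in adversaries:
        for m in THREAT_MODEL[adversary]:
            if m not in measures:
                measures.append(m)
    return measures

# Most organisations working from home only face the first two:
print(required_measures(["script kiddies", "opportunists"]))
```

The point of the exercise is the same as in the post: enumerate the adversaries first, and only then derive the measures, rather than piling on measures for their own sake.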

2020-03-20 first week

Post Syndicated from Vasil Kolev original https://vasil.ludost.net/blog/?p=3438

An interesting week.

A decent part of it went into setting up people's audio – first for the instructors of the operating systems course at FMI, then for everyone at the company – plus catching up on work, cooking, more work, and all sorts of “what can we do in this situation that would be meaningful” things.

At the moment the operating systems course uses, for some things, a combination of live streaming (with about 10 seconds of latency; we may try to bring that down) and IRC, and for some other things Jitsi Meet (I set up an instance at jitsi.ludost.net – anyone who wants to can use it). In general we try to use local resources, both for latency and so as not to accidentally drown the external pipes.
(The bit about the pipes is probably a bit exaggerated, though; Bulgaria has quite good internet, and the fact that Netflix limited streaming to SD only during the pandemic says more about a lack of capacity on their end.)
(But it is quite important not to depend on too many external resources.)

Jitsi and IRC are also quite useful, from what I can see, and to keep from going mad people are having their morning coffee over video conferences. Other people are organising to assemble emergency ventilators (there's an FB group on the subject somewhere, with a few medical people in it), some are 3D-printing the components for a protective shield for intubation and the like (another link to files for such a thing) (I even think the printer at initLab is already harnessed for the cause).

I also noticed how, as one of the grannies was leaving the building (sometime at the start of the week), one of the neighbours offered to pick up whatever she needed from outside so she wouldn't put herself at risk. In short, despite there being some idiots, these times do seem to be bringing out the good in most people after all.

So we survived one week. At least 7 more to go. Personally, I'll count it as a great achievement if it doesn't come to the army being brought out onto the streets (which could happen even without a compelling reason, simply because someone wants to try to accumulate political capital) and if everyone doesn't go mad.

Cheers and good night.