
Building a Photo Diary Ghost on Amazon Lightsail

Post Syndicated from Emma White original https://aws.amazon.com/blogs/compute/building-a-photo-diary-ghost-on-amazon-lightsail/

This post was written by Robert Zhu, a principal technical evangelist at AWS and a member of the GraphQL Working Group. 

Ghost is a simple and flexible alternative to WordPress. With Ghost, you can build a beautiful company website, personal portfolio, photo diary, or anything in between.

In this post, I show you how to start a photo diary using Ghost on Amazon Lightsail, AWS’ easiest solution for hosting virtual private servers. Compared to Amazon EC2, Amazon Lightsail takes care of advanced concepts like VPCs, Security Groups, and IAM policies for you until you want to re-engage those services. Amazon Lightsail also bundles monthly network transfer with the instance, whereas Amazon EC2 charges separately for network transfer.

There are two easy ways to get up and running with Ghost on Lightsail:

1. Using the Ghost Blueprint

Lightsail includes a number of common instance images known as “Blueprints.” When you launch a Blueprint, the selected software is installed and preconfigured, along with any dependencies. This is especially handy for Ghost because it saves you from having to install and configure MySQL. You can find our click-to-launch stacks in the Amazon Lightsail console, as shown in the following image.

Lightsail has pre-built Blueprints for you to launch entire applications with your VPS.

To launch a Blueprint instance, navigate to the Lightsail console. Then, select “Apps + OS” on the “Create Instance” page, and choose “Ghost.” Select the $5/month instance (this is the cheapest instance that meets the minimum requirements for running Ghost and its database on a single box).

 

Once the instance is up and running, find its IP address and open it in your browser. It may take a few minutes for Ghost to start running, during which you may see an error in the browser. As you can see, it is easy to start running Ghost on Amazon Lightsail with the Blueprint.

 

2. Running Ghost in a Container

As an alternative to launching a Ghost Blueprint, you can also run Ghost within a Docker container. In order to run Ghost within a container, you must install Docker on the Lightsail instance. Compared to the Blueprint approach, there are two advantages:

  1. Ability to run other applications on your Lightsail instance, such as forums, static websites, APIs, etc.
  2. Painless upgrades to new Ghost versions. When a new version of Ghost is released, you can upgrade with just a single command.
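
For example, the upgrade looks something like this (a sketch, using the container name and volume path from the steps below; your content survives because it lives on the host, not in the container):

sudo docker pull ghost
sudo docker stop blog && sudo docker rm blog
sudo docker run -d --name blog -p 80:2368 -v /var/lib/ghost/content:/var/lib/ghost/content ghost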

To run Ghost in a container, complete the following steps.

  1. Create a Lightsail Instance (select Ubuntu 18.04).
  2. Once the instance is up and running, connect to it using Lightsail’s browser-based SSH client, then run the following commands to install Docker:

sudo apt-get update && sudo apt-get install docker.io -y

sudo systemctl start docker

sudo systemctl enable docker

  3. To make sure Docker is installed and running, run:

sudo docker run hello-world 

You should see the “Hello from Docker!” welcome message.

  4. Start the Ghost container by running:

sudo mkdir /var/lib/ghost

sudo mkdir /var/lib/ghost/content

sudo docker run -d --name blog -p 80:2368 -v /var/lib/ghost/content:/var/lib/ghost/content ghost

The last command runs the “ghost” container from the public Docker registry. It also maps port 80 (HTTP) on the host to port 2368 inside the container where Ghost is listening for HTTP requests. The -v argument maps a path on the host to a path inside the container. This way, any blog content is persisted into /var/lib/ghost/content and doesn’t disappear when you stop/delete/upgrade the container. The -d argument tells Docker to run the container in detached mode.

 

Now you can test your new blog.

  1. Find the public IP address of your Lightsail instance by selecting Manage from the instance menu.
  2. Paste the IP address into a browser, and you should see your brand new Ghost blog served from the IP address of your VPS.

 

Installing a Custom Theme

You can customize the look and feel of your Ghost blog by installing a custom theme. There are plenty of free and paid themes online. If you’re a designer, you can even build your own theme from scratch.

For my photo diary, I use the London theme. Download the theme by clicking “Clone or download” > “Download ZIP.”


 

Next, we need to log into the Ghost admin console. If you launched your Ghost instance using the Lightsail blueprint, follow these steps:

  1. SSH into your Ghost instance
  2. You should see the Bitnami welcome message and a bitnami_credentials file in your home directory.
  3. Display the default admin user name and password by typing cat bitnami_credentials.
  4. Use your credentials to administer your Ghost instance at http://INSTANCEIPADDRESS/ghost.

If you launched Ghost within a Docker container, then you can administer Ghost by going to http://INSTANCEIPADDRESS/ghost.

Once you are logged into the Ghost admin console, click on Settings > Design > Upload a theme.


You then see a pop-up prompt to upload the theme file. Drag and drop the theme file you downloaded above. Once the upload is complete, click “activate.”

The London theme is perfect for showcasing your photographs.

Refresh your Ghost blog in another tab, and you should see a bold new look and feel.

The Ghost admin console is also used for managing content, plugins, users, and more. For example, you can use header and footer scripts to inject code to add features like analytics and comments. By combining these features, you can make your Ghost blog stand out and fit your workflow.

Managing DNS with Lightsail

If you want to add a custom domain for your blog, you can manage that domain within Lightsail. By managing all your DNS records in one place, you can simplify your workflows.

For example, Lightsail offers the ability to assign Static IP addresses to instances, and then automatically associate an A record to a static IP address. This is a quality-of-life improvement that helps you avoid typing the wrong IP address.
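
If you prefer a terminal, the same operations are available in the AWS CLI (a sketch; “ghost-blog-ip” is an arbitrary label, and you would substitute your own instance name):

aws lightsail allocate-static-ip --static-ip-name ghost-blog-ip
aws lightsail attach-static-ip --static-ip-name ghost-blog-ip --instance-name <your-instance-name>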

Here are the steps to manage your domain with Lightsail:

  1. Register your domain with a domain registrar, such as Route 53 or GoDaddy.
  2. Once registered, create a DNS Zone within Lightsail and note the name servers.
  3. In your domain registrar, replace the domain name servers with your Lightsail DNS Zone name servers. If you’re using Route 53, do not replace the name servers in the default hosted zone. Here’s a video that makes everything clear.
  4. After creating your DNS Zone in Lightsail, you can now associate A (address) records with your Lightsail instances.

Once you have a custom domain, you’ll want to consider adding SSL. Here are step-by-step instructions for issuing an SSL certificate using Let’s Encrypt and terminating SSL using NGINX.
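
On an Ubuntu instance with NGINX in front of Ghost, the core of that flow is short (a sketch; it assumes certbot’s NGINX plugin, a domain already pointing at your static IP, and package names that vary slightly by Ubuntu release):

sudo apt-get install -y certbot python3-certbot-nginx
sudo certbot --nginx -d blog.example.com

certbot rewrites the NGINX configuration to terminate SSL and schedules automatic certificate renewal.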

Conclusion

While Ghost is already a very flexible platform, the Ghost community constantly adds new plugins, themes, extensions, and features. Hosting Ghost yourself is a great way to learn some basic server workflows and get the most out of your Ghost blog. And with Lightsail, it’s simple and cheap! Please feel free to contact me with comments and questions. Go ahead and start running Ghost on Amazon Lightsail.

 

About the Author: 
Robert Zhu is a Principal Developer Evangelist at Amazon Web Services. He focuses on APIs, Web, Mobile, and Gaming. Prior to joining AWS, he worked on GraphQL at Facebook. While at Microsoft, he worked on the .NET Framework, Windows Server, and Microsoft Game Studios. In his spare time, he loves learning about history, economics, and psychology. You can reach him @rbzhu on Twitter or directly via telepathy.

Old CSS, new CSS

Post Syndicated from Eevee original https://eev.ee/blog/2020/02/01/old-css-new-css/

I first got into web design/development in the late 90s, and only as I type this sentence do I realize how long ago that was.

And boy, it was horrendous. I mean, being able to make stuff and put it online where other people could see it was pretty slick, but we did not have very much to work with.

I’ve been taking for granted that most folks doing web stuff still remember those days, or at least the decade that followed, but I think that assumption might be a wee bit out of date. Some time ago I encountered a tweet marvelling at what we had to do without border-radius. I still remember waiting with bated breath for it to be unprefixed!

But then, I suspect I also know a number of folks who only tried web design in the old days, and assume nothing about it has changed since.

I’m here to tell all of you to get off my lawn. Here’s a history of CSS and web design, as I remember it.


(Please bear in mind that this post is a fine blend of memory and research, so I can’t guarantee any of it is actually correct, especially the bits about causality. You may want to try the W3C’s history of CSS, which is considerably shorter, has a better chance of matching reality, and contains significantly less swearing.)

(Also, this would benefit greatly from more diagrams, but it took long enough just to write.)

The very early days

In the beginning, there was no CSS.

This was very bad.

My favorite artifact of this era is the book that taught me HTML: O’Reilly’s HTML: The Definitive Guide, published in several editions in the mid to late 90s. The book was indeed about HTML, with no mention of CSS at all. I don’t have it any more and can’t readily find screenshots online, but here’s a page from HTML & XHTML: The Definitive Guide, which seems to be a revision (I’ll get to XHTML later) with much the same style. Here, then, is the cutting-edge web design advice of 199X:

Screenshot of a plain website in IE, with plain black text on a white background with a simple image

Clearly delineate headers and footers with horizontal rules.

No, that’s not a border-top. That’s an <hr>. The page title is almost certainly centered with, well, <center>.

The page uses the default text color, background, and font. Partly because this is a guidebook introducing concepts one at a time; partly because the book was printed in black and white; and partly, I’m sure, because it reflected the reality that coloring anything was a huge pain in the ass.

Let’s say you wanted all your <h1>s to be red, across your entire site. You had to do this:

<H1><FONT COLOR=red>...</FONT></H1>

every single goddamn time. Hope you never decide to switch to blue!

Oh, and everyone wrote HTML tags in all caps. I don’t remember why we all thought that was a good idea. Maybe this was before syntax highlighting in text editors was very common (read: I was 12 and using Notepad), and uppercase tags were easier to distinguish from body text.

Keeping your site consistent was thus something of a nightmare. One solution was to simply not style anything, which a lot of folks did. This was nice, in some ways, since browsers let you change those defaults, so you could read the Web how you wanted.

A clever alternate solution, which I remember showing up in a lot of Geocities sites, was to simply give every page a completely different visual style. Fuck it, right? Just do whatever you want on each new page.

That trend was quite possibly the height of web design.

Damn, I miss those days. There were no big walled gardens, no Twitter or Facebook. If you had anything to say to anyone, you had to put together your own website. It was amazing. No one knew what they were doing; I’d wager that the vast majority of web designers at the time were clueless hobbyist tweens (like me) all copying from other clueless hobbyist tweens. Half the Web was fan portals about Animorphs, with inexplicable splash pages warning you that their site worked best if you had a 640×480 screen. (Any 12-year-old with insufficient resolution should, presumably, buy a new monitor with their allowance.) Everyone who was cool and in the know used Internet Explorer 3, the most advanced browser, but some losers still used Netscape Navigator so you had to put a “Best in IE” animated GIF on your splash page too.

This was also the era of “web-safe colors” — a palette of 216 colors, where every channel was one of 00, 33, 66, 99, cc, or ff — which existed because some people still had 256-color monitors! The things we take for granted now, like 24-bit color.

In fact, a lot of stuff we take for granted now was still a strange and untamed problem space. You want to have the same navigation on every page on your website? Okay, no problem: copy/paste it onto each page. When you update it, be sure to update every page — but most likely you’ll forget some, and your whole site will become an archaeological dig into itself, with strata of increasingly bitrotted pages.

Much easier was to use frames, meaning the browser window is split into a grid and a different page loads in each section… but then people would get confused if they landed on an individual page without the frames, as was common when coming from a search engine like AltaVista. (I can’t believe I’m explaining frames, but no one has used them since like 2001. You know iframes? The “i” is for inline, to distinguish them from regular frames, which take up the entire viewport.)

PHP wasn’t even called that yet, and nobody had heard of it. This weird “Perl” and “CGI” thing was really strange and hard to understand, and it didn’t work on your own computer, and the errors were hard to find and diagnose, and anyway Geocities didn’t support it. If you were really lucky and smart, your web host used Apache, and you could use its “server side include” syntax to do something like this:

<BODY>
    <TABLE WIDTH=100% BORDER=0 CELLSPACING=8 CELLPADDING=0>
        <TR>
            <TD COLSPAN=2>
                <!--#include virtual="/header.html" --> 
            </TD>
        </TR>
        <TR>
            <TD WIDTH=20%>
                <!--#include virtual="/navigation.html" --> 
            </TD>
            <TD>
                (actual page content goes here)
            </TD>
        </TR>
    </TABLE>
</BODY>

Mwah. Beautiful. Apache would see the special comments, paste in the contents of the referenced files, and you’re off to the races. The downside was that when you wanted to work on your site, all the navigation was missing, because you were doing it on your regular computer without Apache, and your web browser thought those were just regular HTML comments. It was impossible to install Apache, of course, because you had a computer, not a server.

Sadly, that’s all gone now — paved over by homogenous timelines where anything that wasn’t made this week is old news and long forgotten. The web was supposed to make information eternal, but instead, so much of it became ephemeral. I miss when virtually everyone I knew had their own website. Having a Twitter and an Instagram as your entire online presence is a poor substitute.

So, let’s look at the Space Jam website.

Case study: Space Jam

Space Jam, if you’re not aware, is the greatest movie of all time. It documents Bugs Bunny’s extremely short-lived basketball career, playing alongside a live action Michael Jordan to save the planet from aliens for some reason. It was followed by a series of very successful and critically acclaimed RPG spinoffs, which describe the fallout of the Space Jam and are extremely canon.

And we are truly blessed, for 24 years after it came out, its website is STILL UP. We can explore the pinnacle of 1996 web design, right here, right now.

First, notice that every page of this site is a static page. Not only that, but it’s a static page ending in .htm rather than .html, because people on Windows versions before 95 were still beholden to 8.3 filenames. Not sure why that mattered in a URL, as if you were going to run Windows 3.11 on a Web server, but there you go.

The CSS for the splash page looks like this:

<body bgcolor="#000000" background="img/bg_stars.gif" text="#ff0000" link="#ff4c4c" vlink="#ff4c4c" alink="#ff4c4c">

Haha, just kidding! What the fuck is CSS? Space Jam predates it by a month. (I do see a single line in the page source, but I’m pretty sure that was added much later to style some legally obligatory policy links.)

Notice the extremely precise positioning of these navigation links. This feat was accomplished the same way everyone did everything in 1996: with tables.

In fact, tables have one functional advantage over CSS for layout, which was very important in those days, and not only because CSS didn’t exist yet. You see, you can ctrl-click to select a table cell and even drag around to select all of them, which shows you how the cells are arranged and functions as a super retro layout debugger. This was great because the first meaningful web debug tool, Firebug, wasn’t released until 2006 — a whole decade later!

Screenshot of the Space Jam website with the navigation table's cells selected, showing how the layout works

The markup for this table is overflowing with inexplicable blank lines, but with those removed, it looks like this:

<table width=500 border=0>
<TR>
<TD colspan=5 align=right valign=top>
</td></tr>
<tr>
<td colspan=2 align=right valign=middle>
<br>
<br>
<br>
<a href="cmp/pressbox/pressboxframes.html"><img src="img/p-pressbox.gif" height=56 width=131 alt="Press Box Shuttle" border=0></a>
</td>
<td align=center valign=middle>
<a href="cmp/jamcentral/jamcentralframes.html"><img src="img/p-jamcentral.gif" height=67 width=55 alt="Jam Central" border=0></a>
</td>
<td align=center valign=top>
<a href="cmp/bball/bballframes.html"><img src="img/p-bball.gif" height=62 width=62 alt="Planet B-Ball" border=0></a>
</td>
<td align=center valign=bottom>
<br>
<br>
<a href="cmp/tunes/tunesframes.html"><img src="img/p-lunartunes.gif" height=77 width=95 alt="Lunar Tunes" border=0></a>
</td>
</tr>
<tr>
<td align=middle valign=top>
<br>
<br>
<a href="cmp/lineup/lineupframes.html"><img src="img/p-lineup.gif" height=52 width=63 alt="The Lineup" border=0></a>
</td>
<td colspan=3 rowspan=2 align=right valign=middle>
<img src="img/p-jamlogo.gif" height=165 width=272 alt="Space Jam" border=0>
</td>
<td align=right valign=bottom>
<a href="cmp/jump/jumpframes.html"><img src="img/p-jump.gif" height=52 width=58 alt="Jump Station" border=0></a>
</td>
</tr>
...
</table>

That’s the first two rows, including the logo. You get the idea. Everything is laid out with align and valign on table cells; rowspans and colspans are used frequently; and there are some <br>s thrown in for good measure, to adjust vertical positioning by one line-height at a time.

Other fantastic artifacts to be found on this page include this header, which contains Apache SSI syntax! This must’ve quietly broken when the site was moved over the years; it’s currently hosted on Amazon S3. You know, Amazon? The bookstore?

<table border=0 cellpadding=0 cellspacing=0 width=488 height=60>
<tr>
<td align="center"><!--#include virtual="html.ng/site=spacejam&type=movie&home=no&size=234&page.allowcompete=no"--></td>
<td align="center" width="20"></td>
<td align="center"><!--#include virtual="html.ng/site=spacejam&type=movie&home=no&size=234"--></td>
</tr>
</table>

Okay, let’s check out jam central. I’ve used my browser dev tools to reduce the viewport to 640×480 for the authentic experience (although I’d also have lost some vertical space to the title bar, taskbar, and five or six IE toolbars).

Note the frames: the logo in the top left leads back to the landing page, cleverly saving screen space on repeating all that navigation, and the top right is a fucking ad banner which has been blocked like seven different ways. All three parts are separate pages.

Screenshot of the Space Jam website's 'Jam Central'

Note also the utterly unreadable red text on a textured background, one of the truest hallmarks of 90s web design. “Why not put that block of text on an easier-to-read background?” you might ask. You imbecile. How would I possibly do that? Only the <body> has a background attribute! I could use a table, but tables only support solid background colors, and that would look so boring!

But wait, what is this new navigation widget? How are the links all misaligned like that? Is this yet another table? Well, no, although filling a table with chunks of a sliced-up image wasn’t uncommon. But this is an imagemap, a long-forgotten HTML feature. I’ll just show you the source:

<img src="img/m-central.jpg" height=301 width=438 border=0 alt="navigation map" usemap="#map"><br>

<map name="map">
<area shape="rect" coords="33,92,178,136" href="prodnotesframes.html" target="_top">
<area shape="rect" coords="244,111,416,152" href="photosframes.html" target="_top">
<area shape="rect" coords="104,138,229,181" href="filmmakersframes.html" target="_top">
<area shape="rect" coords="230,155,334,197" href="trailerframes.html" target="_top">
</map>

I assume this is more or less self-explanatory. The usemap attribute attaches an image map, which is defined as a bunch of clickable areas, beautifully encoded as inscrutable lists of coordinates or something.

And this stuff still works! This is in HTML! You could use it right now! Probably don’t though!

The thumbnail grid

Let’s look at one more random page here. I’d love to see some photos from the film. (Wait, photos? Did we not know what “screenshots” were yet?)

Screenshot of the Space Jam website's photos page

Another frameset, but arranged differently this time.

<body bgcolor="#7714bf" background="img/bg-jamcentral.gif" text="#ffffff" link="#edb2fc" vlink="#edb2fc" alink="#edb2fc">

They did an important thing here: since they specified a background image (which is opaque), they also specified a background color. Without it, if the background image failed to load, the page would be white text on the default white background, which would be unreadable.

(That’s still an important thing to keep in mind. I feel like modern web development tends to assume everything will load, or sees loading as some sort of inconvenience to be worked around, but not everyone is working on a wired connection in a San Francisco office twenty feet away from a backbone.)

But about the page itself. Thumbnail grids are a classic problem of web design, dating all the way back to… er… well, at least as far back as Space Jam. The main issue is that you want to put things next to each other, whereas HTML defaults to stacking everything in one big column. You could put all the thumbnails inline, in a single row of (wrapping) text, but that wouldn’t be much of a grid — and you usually want each one to have some sort of caption.

Space Jam’s approach was to use the only real tool anyone had in their toolbox at the time: a table. It’s structured like this:

<table cellpadding=10>
<tr><td align=center><a href="..."><img src="..."></a></td>...</tr>
<tr>...</tr>
<tr>...</tr>
</table>

A 3×3 grid of thumbnails, left to the browser to arrange. (The last image, on a row of its own, isn’t actually part of the table.) This can’t scale to fit your screen, but everyone’s screen was pretty tiny back then, so that was slightly less of a concern. They didn’t add captions here, but since every thumbnail is wrapped in a table cell, they easily could have.

This was the state of the art in thumbnail grids in 1996. We’ll be revisiting this little UI puzzle a few times; you can see live examples (and view source for sample markup) on a separate page.

But let’s take a moment to appreciate the size of the “full-size, full-color, internet-quality” movie screenshots on my current monitor.

Screenshot of one of the Space Jam website's full-size photos, fullscreened on my monitor

Hey, though, they’re less than 16 KB! That’ll only take nine seconds to download.

(I’m reminded of the problem of embedded video, which wasn’t solved until HTML5’s <video> tag some years later. Until then, you had to use a binary plugin, and all of them were terrible.)

(Oh, by the way: images within links, by default, have a link-colored border around them. Image links are usually self-evident, so this was largely annoying, and until CSS you had to disable them for every single image with <img border=0>.)

The regular early days

So that’s where we started, and it sucked. If you wanted any kind of consistency on more than a handful of pages, your options were very limited, and they were pretty much limited to a whole lot of copying and pasting. The Space Jam website opted to, for the most part, not bother at all — as did many others.

Then CSS came along, and it was a fucking miracle. All that inline repetition went away. You want all your top-level headings to be a particular color? No problem:

H1 {
    color: #FF0000;
}

Bam! You’re done. No matter how many <h1>s you have in your document, every single one of them will be eye-searing red, and you never have to think about it again. Even better, you can put that snippet in its own file and have that questionable aesthetic choice applied to every page of your whole site with almost no effort! The same applied to your gorgeous tiling background image, the colors of your links, and the size of the font in your tables.

(Just remember to wrap the contents of your <style> tags in HTML comments, or old browsers without CSS support will display them as text.)
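
Era-appropriate boilerplate for both tricks looked something like this (a sketch, uppercase tags and all):

<HEAD>
<LINK REL="stylesheet" TYPE="text/css" HREF="/site.css">
<STYLE TYPE="text/css">
<!--
H1 { color: #FF0000; }
-->
</STYLE>
</HEAD>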

You weren’t limited to styling tags en masse, either. CSS introduced “classes” and “IDs” to target only specifically flagged elements. A selector like P.important would only affect <P CLASS="important">, and #header would only affect <H1 ID="header">. (The difference is that IDs are intended to be unique in a document, whereas classes can be used any number of times.) With these tools, you could effectively invent your own tags, giving you a customized version of HTML specific to your website!
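
Concretely (a sketch; the names are invented):

<H1 ID="header">My Cool Site</H1>
<P CLASS="important">Don't forget to feed the iguana.</P>

#header { text-align: center; }
P.important { color: #FF0000; }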

This was a huge leap forward, but at the time, no one (probably?) was thinking of using CSS to actually arrange the page. When CSS 1 was made a recommendation in December ‘96, it barely addressed layout at all. All it did was divorce HTML’s existing abilities from the tags they were attached to. We had font colors and backgrounds because <FONT COLOR> and <BODY BACKGROUND> existed. The only feature that even remotely affected where things were positioned was the float property, the equivalent to <IMG ALIGN>, which pulled an image to the side and let text flow around it, like in a magazine article. Hardly whelming.
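
That one layout trick amounted to this (a sketch; the class name is made up):

IMG.pull { float: left; margin: 0 1em 0.5em 0; }

Text following the image wraps along its right side, which is exactly what <IMG ALIGN=left> had always done.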

This wasn’t too surprising. HTML hadn’t had any real answers for layout besides tables, and the table properties were too complicated to generalize in CSS and too entangled with the tag structure, so there was nothing for CSS 1 to inherit. It merely reduced the repetition in what we were already doing with e.g. <FONT> tags — making Web design less tedious, less error-prone, less full of noise, and much more maintainable. A pretty good step forward, and everyone happily adopted it for that, but tables remained king for arranging your page.

That was okay, though; all your blog really needed was a header and a sidebar, which tables could do just fine, and it wasn’t like you were going to overhaul that basic structure very often. Copy/pasting a few lines of <TABLE BORDER=0> and <TD WIDTH=20%> wasn’t nearly as big a deal.

For some span of time — I want to say a couple years, but time passes more slowly when you’re a kid — this was the state of the Web. Tables for layout, CSS for… well, style. Colors, sizes, bold, underline. There was even this sick trick you could do with links where they’d only be underlined when the mouse was pointing at them. Tubular!
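
For posterity, the sick trick in question was just two rules:

A { text-decoration: none; }
A:hover { text-decoration: underline; }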

(Fun fact: HTML email is still basically trapped in this era.)

(And here’s about where I come in, at the ripe old age of 11, with no clue what I was doing and mostly learning from other 11-year-olds who also had no clue what they were doing. But that was fine; a huge chunk of the Web was 11-year-olds making their own websites, and it was beautiful. Why would you go to a business website when you can take a peek into the very specific hobbies of someone on the other side of the planet?)

The dark times

A year and a half later, in mid ‘98, we were gifted CSS 2. (I love the background on this page, by the way.) This was a modest upgrade that addressed a few deficiencies in various areas, but most interesting was the addition of a couple positioning primitives: the position property, which let you place elements at precise coordinates, and the inline-block display mode, which let you stick an element in a line of text like you could do with images.

Such tantalizing fruit, just out of reach! Using position seemed nice, but pixel-perfect positioning was at serious odds with the fluid design of HTML, and it was difficult to make much of anything that didn’t fall apart on other screen sizes or have other serious drawbacks. This humble inline-block thing seemed interesting enough; after all, it solved the core problem of HTML layout, which is putting things next to each other. But at least for the moment, no browser implemented it, and it was largely ignored.

I can’t say for sure if it was the introduction of positioning or some other factor, but something around this time inspired folks to try doing layout in CSS. Ideally, you would completely divorce the structure of your page from its appearance. A website even came along to take this principle to the extreme — CSS Zen Garden is still around, and showcases the same HTML being radically transformed into completely different designs by applying different stylesheets.

Trouble was, early CSS support was buggy as hell. In retrospect, I suspect browser vendors merely plucked the behavior off of HTML tags and called it a day. I’m delighted to say that RichInStyle still has an extensive list of early browser CSS bugs up; here are some of my favorites:

  • IE 3 would ignore all but the last <style> tag in a document.

  • IE 3 ignored pseudo-classes, so a:hover would be treated as a.

  • IE 3 and IE 4 treated auto margins as zero. Actually, I think this one might’ve persisted all the way to IE 6. But that was okay, because IE 6 also incorrectly applied text-align: center to block elements.

  • If you set a background image to an absolute URL, IE 3 would try to open the image in a local program, as though you’d downloaded it.

  • Netscape 4 understood an ID selector like #id, but ignored h1#id as invalid.

  • Netscape 4 didn’t inherit properties — including font and text color! — into table cells.

  • Netscape 4 applied properties on <li> to the list marker, rather than the contents.

  • If the same element has both float and clear (not unreasonable), Netscape 4 for Mac crashes.

This is what we had to work with. And folks wanted to use CSS to lay out an entire page? Ha.

Yet the idea grew in popularity. It even became a sort of elitist rallying cry, a best practice used to beat other folks over the head. Tables for layout are just plain bad, you’d hear! They confuse screenreaders, they’re semantically incorrect, they interact poorly with CSS positioning! All of which is true, but it was a much tougher pill to swallow when the alternative was—

Well, we’ll get to that in a moment. First, some background on the Web landscape circa 2000.

The end of the browser wars and subsequent stagnation

The short version is: this company Netscape had been selling its Navigator browser (to businesses; it was free for personal use), and then Microsoft entered the market with its completely free Internet Explorer browser, and then Microsoft had the audacity to bundle IE with Windows. Can you imagine? An operating system that comes with a browser? This was a whole big thing, Microsoft was sued over it, and they lost, and the consequence was basically nothing.

But it wouldn’t have mattered either way, because they’d still done it, and it had worked. IE pretty much annihilated Netscape’s market share. Both browsers were buggy as hell, and differently buggy as hell, so a site built exclusively against one was likely to be a big mess when viewed in the other — this meant that when Netscape’s market share dropped, web designers paid less and less attention to it, and less of the Web worked in it, and its market share dropped further.

Sucks for you if you don’t use Windows, I guess. Which is funny, because there was an IE 5 for Mac, and it was generally less buggy than IE 6. (Incidentally, Bill Gates wasn’t so much a brilliant nerd as an aggressive and ruthless businessman who made his fortune by deliberately striving to annihilate any competition standing in his way and making computing worse overall as a result, just saying.)

By the time Windows XP shipped in mid 2001, with Internet Explorer 6 built in, Netscape had gone from a juggernaut to a tiny niche player.

And then, having completely and utterly dominated, Microsoft stopped. Internet Explorer had seen a release every year or so since its inception, but IE 6 was the last release for more than five years. It was still buggy, but that was less noticeable when there was no competition, and it was good enough. Windows XP, likewise, was good enough to take over the desktop, and there wouldn’t be another Windows for just as long.

The W3C, the group who write the standards (not to be confused with W3Schools, who are shady SEO leeches), also stopped. HTML had seen several revisions throughout the mid 90s, and then froze as HTML 4. CSS had gotten an update in only a year and a half, and then no more; the minor update CSS 2.1 wouldn’t hit Candidate Recommendation status until early 2004, and took another seven years to be finalized.

With IE 6’s dominance, it was as if the entire Web was frozen in time. Standards didn’t matter, because there was effectively only one browser, and whatever it did became the de facto standard. As the Web grew in popularity, IE’s stranglehold also made it difficult to use any platform other than Windows, since IE was Windows-only and it was a coin flip whether a website would actually work with any other browser.

(One begins to suspect that monopolies are bad. There oughta be a law!)

In the meantime, Netscape had put themselves in an even worse position by deciding to do a massive rewrite of their browser engine, culminating in the vastly more standards-compliant Netscape 6 — at the cost of several years away from the market while IE was kicking their ass. It never broke 10% market share, while IE’s would peak at 96%. On the other hand, the new engine was open sourced as the Mozilla Application Suite, which would be important in a few years.

Before we get to that, some other things were also happening.

Quirks mode

All early CSS implementations were riddled with bugs, but one in particular is perhaps the most infamous CSS bug of all time: the box model bug.

You see, a box (the rectangular space taken up by an element) has several measurements: its own width and height, then surrounding whitespace called padding, then an optional border, then a margin separating it from neighboring boxes. CSS specifies that these properties are all additive. A box with these styles:

    width: 100px;
    padding: 10px;
    border: 2px solid black;

…would thus be 124 pixels wide, from border to border.

IE 4 and Netscape 4, on the other hand, took a different approach: they treated width and height as measuring from border to border, and they subtracted the border and padding to get the width of the element itself. The same box in those browsers would be 100 pixels wide from border to border, with 76 pixels remaining for the content.
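
(Decades later, CSS would make that interpretation available as an opt-in. With box-sizing, the same declarations produce the 100-pixel border-to-border box, with 76 pixels left for content:

.box {
    box-sizing: border-box;  /* width now measures border to border */
    width: 100px;
    padding: 10px;
    border: 2px solid black;
}

But that's getting ahead of the story.)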

This conflict with the spec was not ideal, and IE 6 set out to fix it. Unfortunately, simply making the change would mean completely breaking the design of a whole lot of websites that had previously worked in both IE and Netscape.

So the IE team came up with a very strange compromise: they declared the old behavior (along with several other major bugs) as “quirks mode” and made it the default. The new “strict mode” or “standards mode” had to be opted into, by placing a “doctype” at the beginning of your document, before the <html> tag. It would look something like this:

<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">

Everyone had to paste this damn mess of a line at the top of every single HTML document for years. (HTML5 would later simplify it to <!DOCTYPE html>.) In retrospect, it’s a really strange way to opt into correct CSS behavior; doctypes had been part of the HTML spec since way back when it was an RFC. I’m guessing the idea was that, since nobody bothered actually including one, it was a convenient way to allow opting in without requiring proprietary extensions just to avoid behavior that had been wrong in the first place. Good for the IE team!

The funny thing is, quirks mode still exists and is still the default in all browsers, twenty years later! The exact quirks have varied over time, and in particular neither Chrome nor Firefox use the IE box model even in quirks mode, but there are still quite a few other emulated bugs.

Modern browsers also have “almost standards” mode, which emulates only a single quirk, perhaps the second most infamous one: if a table cell contains only a single image, the space under the baseline is removed. Under normal CSS rules, the image is sitting within a line of (otherwise empty) text, which requires some space reserved underneath for descenders — the tails on letters like y. Early browsers didn’t handle this correctly, and some otherwise strict-mode websites from circa 2000 rely on it — e.g., by cutting up a large image and arranging the chunks in table cells, expecting them to display flush against each other — hence the intermediate mode to keep them limping along.
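
(The strict-mode fix, should you ever exhume such a site, is a one-liner that takes the image out of the line of text entirely:

td img { display: block; }

No line box, no descender space.)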

But getting back to the past: while this was certainly a win for standards (and thus interop), it created a new problem. Since IE 6 dominated, and doctypes were optional, there was little compelling reason to bother with strict mode. Other browsers ended up emulating it, and the non-standard behavior became its own de facto standard. Web designers who cared about this sort of thing (and to our credit, there were a lot of us) made a rallying cry out of enabling strict mode, since it was the absolute barest minimum step towards ensuring compatibility with other browsers.

The rise and fall of XHTML

Meanwhile, the W3C had lost interest in HTML in favor of developing XHTML, an attempt to redesign HTML with the syntax of XML rather than SGML.

(What on Earth is SGML, you ask? I don’t know. Nobody knows. It’s the grammar HTML was built on, and that’s the only reason anyone has heard of it.)

To their credit, there were some good reasons to do this at the time. HTML was generally hand-written (as it still is now), and anything hand-written is likely to have the occasional bugs. Browsers weren’t in the habit of rejecting buggy HTML outright, so they had various error-correction techniques — and, as with everything else, different browsers handled errors differently. Slightly malformed HTML might appear to work fine in IE 6 (where “work fine” means “does what you hoped for”), but turn into a horrible mess in anything else.

The W3C’s solution was XML, because their solution to fucking everything in the early 2000s was XML. If you’re not aware, XML takes a much more explicit and aggressive approach to error handling — if your document contains a parse error, the entire document is invalid. That means if you bank on XHTML and make a single typo somewhere, nothing at all renders. Just an error.

This sucked. It sounds okay on the face of things, but consider: generic XML is usually assembled dynamically with libraries that treat a document as a tree you manipulate, then turn it all into text when you’re done. That’s great for the common use of XML as data serialization, where your data is already a tree and much of the XML structure is simple and repetitive and easy to squirrel away in functions.

HTML is not like that. An HTML document has little reliable repeating structure; even this blog post, constructed mostly from <p> tags, also contains surprise <em>s within body text and the occasional <h2> between paragraphs. That’s not fun to express as a tree. And this is a big deal, because server-side rendering was becoming popular around the same time, and generated HTML was — still is! — put together with templates that treat it as a text stream.

If HTML were only written as complete static documents, then XHTML might have worked out — you write a document, you see it in your browser, you know it works, no problem. But generating it dynamically and risking that particular edge cases might replace your entire site with an unintelligible browser error? That sucks.

It certainly didn’t help that we were just starting to hear about this newfangled Unicode thing around this time, and it was still not always clear how exactly to make that work, and one bad UTF-8 sequence is enough for an entire XML document to be considered malformed!

And so, after some dabbling, XHTML was largely forgotten. Its legacy lives on in two ways:

  • It got us all to stop using uppercase tag names! So long <BODY>, hello <body>. XML is case-sensitive, you see, and all the XHTML tags were defined in lowercase, so uppercase tags simply would not work. (Fun fact: to this day, JavaScript APIs report HTML tag names in uppercase.) The increased popularity of syntax highlighting probably also had something to do with this; we weren’t all still using Notepad as we had been in 1997.

  • A bunch of folks still think self-closing tags are necessary. You see, HTML has two kinds of tags: containers like <p>...</p> and markers like <br>. Since a <br> can’t possibly contain anything, there’s no such thing as </br>. XML, as a generic grammar, doesn’t have this distinction; every tag must be closed, but as a shortcut, you can write <br/> to mean <br></br>.

    XHTML has been dead for years, but for some reason, I still see folks write <br/> in regular HTML documents. Outside of XML, that slash doesn’t do anything; HTML5 has defined it for compatibility reasons, but it’s silently ignored. It’s even actively harmful, since it might lead you to believe that <script/> is an empty <script> tag — but in HTML, it definitely is not!

I do miss one thing about XHTML. You could combine it with XSLT, the XML templating meta-language, to do in-browser templating (i.e., slot page-specific contents into your overall site layout) with no scripting required. It’s the only way that’s ever been possible, and it was cool as all hell when it worked, but the drawbacks were too severe when it didn’t. Also, XSLT is totally fucking incomprehensible.

The beginning of CSS layout

Back to CSS!

You’re an aspiring web designer. For whatever reason, you want to try using this CSS thing to lay out your whole page, even though it was clearly intended just for colors and stuff. What do you do?

As I mentioned before, your core problem is putting things next to each other. Putting things on top of each other is a non-problem — that’s the normal behavior of HTML. The whole reason everyone uses tables is that you can slop stuff into table cells and have it laid out side-by-side, in columns.

Well, tables seem to be out. CSS 2 had added some element display modes that corresponded to the parts of a table, but to use them, you’d have to have the same three levels of nesting as real tables: the table itself, then a row, then a cell. That doesn’t seem like a huge step up, and anyway, IE won’t support them until the distant future.
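
For reference, those display modes look like this (a sketch; the class names are invented):

.fake-table { display: table; }
.fake-row { display: table-row; }
.fake-cell { display: table-cell; }

Three nested wrappers to recreate <table>, <tr>, and <td>, just without the table tags.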

There’s that position thing, but it seems to make things overlap more often than not. Hmm.

What does that leave?

Only one tool, really: float.

I said that float was intended for magazine-style “pull” images, which is true, but CSS had defined it fairly generically. In principle, it could be applied to any element. If you wanted a sidebar, you could tell it to float to the left and be 20% the width of the page, and you’d get something like this:

+---------+
| sidebar | Hello, and welcome to my website!
|         |
+---------+

Alas! Floating has the secondary behavior that text wraps around it. If your page text was ever longer than your sidebar, it would wrap around underneath the sidebar, and the illusion would shatter. But hey, no problem. CSS specified that floats don’t wrap around each other, so all you needed to do was float the body as well!

+---------+ +-----------------------------------+
| sidebar | | Hello, and welcome to my website! |
|         | |                                   |
+---------+ | Here's a longer paragraph to show |
            | that my galaxy brain CSS float    |
            | nonsense prevents text wrap.      |
            +-----------------------------------+

This approach worked, but its limitations were much more obvious than those of tables. If you added a footer, for example, then it would try to fit to the right of the body text — remember, all of that is “pull” floats, so as far as the browser is concerned, the “cursor” is still at the top. So now you need to use clear, which bumps an element down below all floats, to fix that. And if you made the sidebar 20% wide and the body 80% wide, then any margin between them would add to that 100%, making the page wider than the viewport, so now you have an ugly horizontal scrollbar, so you have to do some goofy math to fix that as well. If you have borders or backgrounds on either part, then it was a little conspicuous that they were different heights, so now you have to do some truly grotesque stuff to fix that. And the more conscientious authors noticed that screenreaders would read the entire sidebar before getting to the body text, which is a pretty rude thing to subject blind visitors to, so they came up with yet more elaborate setups to have a three-column layout with the middle column appearing first in the HTML.

The result was a design that looked nice and worked well and scaled correctly, but backed by a weird mess of CSS. None of what you were writing actually corresponded to what you wanted — these are major parts of your design, not one-off pull quotes! It was difficult to understand the relationship between the layout-related CSS and what appeared on the screen, and that would get much worse before it got better.
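
For the record, the two-column arrangement above boils down to something like this (a sketch; the widths deliberately sum to less than 100% to leave slack for margins, per the goofy math mentioned earlier):

#sidebar { float: left; width: 20%; }
#content { float: right; width: 75%; }
#footer { clear: both; }

Nothing in those rules says "two columns with a footer below"; the layout falls out as a side effect of pulls and clears.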

Thumbnail grid 2

Armed with a new toy, we can improve that thumbnail grid. The original table-based layout was, even if you don’t care about tag semantics, incredibly tedious. Now we can do better!

<ul class="thumbnail-grid">
    <li><img src="..."><br>caption</li>
    <li><img src="..."><br>caption</li>
    <li><img src="..."><br>caption</li>
    ...
</ul>

This is the dream of CSS: your HTML contains the page data in some sensible form, and then CSS describes how it actually looks.

Unfortunately, with float as the only tool available to us, the results are a bit rough. This new version does adapt better to various screen sizes, but it requires some hacks: the cells have to be a fixed height, centering the whole grid is fairly complicated, and the grid effect falls apart entirely with wider elements. It’s becoming clear that what we wanted is something more like a table, but with a flexible number of columns. This is just faking it.
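
The CSS behind this version is at least short (a sketch; the fixed height is one of the hacks in question, since rows of floats snag on any cell taller than its neighbors):

.thumbnail-grid li {
    float: left;
    width: 250px;
    height: 200px;
}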

You also need this weird “clearfix” thing, an incantation that would become infamous during this era. Remember that a float doesn’t move the “cursor” — a fake idea I’m using, but close enough. That means that this <ul>, which is full only of floated elements, has no height at all. It ends exactly where it begins, with all the floated thumbnails spilling out below it. Worse, because any subsequent elements don’t have any floated siblings, they’ll ignore the thumbnails entirely and render normally from just below the empty “grid” — producing an overlapping mess!

The solution is to add a dummy element at the end of the list which takes up no space, but has the CSS clear: both — bumping it down below all floats. That effectively pushes the bottom of the <ul> under all the individual thumbnails, so it fits snugly around them.

Browsers would later support the ::before and ::after “generated content” pseudo-elements, which let us avoid the dummy element entirely. Stylesheets from the mid-00s were often littered with stuff like this:

.thumbnail-grid::after {
    content: '';
    display: block;
    clear: both;
}

Still, it was better than tables.

DHTML

As a quick aside into the world of JavaScript, the newfangled position property did give us the ability to do some layout things dynamically. I heartily oppose such heresy, not least because no one has ever actually done it right, but it was nice for some toys.

Thus began the era of “dynamic HTML” — i.e., HTML affected by JavaScript, a term that has fallen entirely out of favor because we can’t even make a fucking static blog without JavaScript any more. In the early days it was much more innocuous, with teenagers putting sparkles that trailed behind your mouse cursor or little analog clocks that ticked by in real time.

The most popular source of these things was Dynamic Drive, a site that miraculously still exists and probably has a bunch of toys not updated since the early 00s.

But if you don’t like digging, here’s an example: every year (except this year when I forgot oops), I like to add confetti and other nonsense to my blog on my birthday. I’m very lazy so I started this tradition by using this script I found somewhere, originally intended for snowflakes. It works by placing a bunch of images on the page, giving them position: absolute, and meticulously altering their coordinates over and over.

Contrast this with the version I wrote from scratch a couple years ago, which has only a tiny bit of JS to set up the images, then lets the browser animate them with CSS. It’s slightly less featureful, but lets the browser do all the work, possibly even with hardware acceleration. How far we’ve come.

Web 2.0

Dark times can’t last forever. A combination of factors dragged us towards the light.

One of the biggest was Firefox — or, if you were cool, originally Phoenix and then Firebird — which hit 1.0 in Nov ‘04 and went on to take a serious bite out of IE. That rewritten Netscape 6 browser core, the heart of the Mozilla Suite, had been extracted into a standalone browser. It was quick, it was simple, it was much more standard-compliant, and absolutely none of that mattered.

No, Firefox really got a foothold because it had tabs. IE 6 did not have tabs; if you wanted to open a second webpage, you opened another window. It fucking sucked, man. Firefox was a miracle.

Firefox wasn’t the first tabbed browser, of course; the full Mozilla Suite’s browser had them, and the obscure (but scrappy!) Opera had had them for ages. But it was Firefox that took off, for various reasons, not least of which was that it didn’t have a giant fucking ad bar at the top like Opera did.

Designers did push for Firefox on standards grounds, of course; it’s just that that angle primarily appealed to other designers, not so much to their parents. One of the most popular and spectacular demonstrations was the Acid2 test, intended to test a variety of features of then-modern Web standards. It had the advantage of producing a cute smiley face when rendered correctly, and a fucking nightmare hellscape in IE 6. Early Firefox wasn’t perfect, but it was certainly much closer, and you could see it make progress until it fully passed with the release of Firefox 3.

It also helped that Firefox had a faster JavaScript engine, even before JIT caught on. Much, much faster. Like, as I recall, IE 6 implemented getElementById by iterating over the entire document, even though IDs are unique. Glance at some old jQuery release announcements; they usually have some performance charts, and everything else absolutely dwarfs IE 6 through 8.

Oh, and there was that whole thing where IE 6 was a giant walking security hole, especially with its native support for arbitrary binary components that only needed a “yes” click on an arcane dialog to get full and unrestricted access to your system. Probably didn’t help its reputation.

Anyway, with something other than IE taking over serious market share, even the most ornery designers couldn’t just target IE 6 and call it a day any more. Now there was a reason to use strict mode, a reason to care about compatibility and standards — which Firefox was making a constant effort to follow better, while IE 6 remained stagnant.

(I’d argue that this effect opened the door for OS X to make some inroads, and also for the iPhone to exist at all. I’m not kidding! Think about it; if the iPhone browser hadn’t actually worked with anything because everyone was still targeting IE 6, it’d basically have been a more expensive Palm. Remember, at first Apple didn’t even want native apps; it bet on the Web.)

(Speaking of which, Safari was released in Jan ‘03, based on a fork of the KHTML engine used in KDE’s Konqueror browser. I think I was using KDE at the time, so this was very exciting, but no one else really cared about OS X and its 2% market share.)

Another major factor appeared on April Fools’ Day, 2004, when Google announced Gmail. Ha, ha! A funny joke. Webmail that isn’t terrible? That’s a good one, Google.

Oh. Oh, fuck. Oh they’re not kidding. How the fuck does this even work

The answer, as every web dev now knows, is XMLHttpRequest — named for the fact that nobody has ever once used it to request XML. Apparently it was invented by Microsoft for use with Exchange, then cloned early on by Mozilla, but I’m just reading this from Wikipedia and you can do that yourself.

The important thing is, it lets you make an HTTP request from JavaScript. You could now update only part of a page with new data, completely in the background, without reloading. Nobody had heard of this thing before, so when Google dropped an entire email client based on it, it was like fucking magic.

Arguably the whole thing was a mistake and has led to a hell future where static pages load three paragraphs of text in the background using XHR for no goddamn reason, but that’s a different post.

Along similar lines, August 2006 saw the release of jQuery, a similar miracle. Not only did it paper over the differences between IE’s “JScript” APIs and the standard approaches taken by everyone else (which had been done before by other libraries), but it made it very easy to work with whole groups of elements at a time, something that had historically been a huge pain in the ass. Now you could fairly easily apply CSS all over the place from JavaScript! Which is a bad idea! But everything was so bad that we did it anyway!

Hold on, I hear you cry. These things are about JavaScript! Isn’t this a post about CSS?

You’re absolutely right! I mention the rise of JavaScript because I think it led directly to the modern state of CSS, thanks to an increase in one big factor:

Ambition

Firefox showed us that we could have browsers that actually, like, improve — every new improvement on Acid2 was exciting. Gmail showed us that the Web could do more than show plain text with snowflakes in front.

And folks started itching to get fancy.

The problem was, browsers hadn’t really gotten any better yet. Firefox was faster in some respects, and it adhered more closely to the CSS spec, but it didn’t fundamentally do anything that browsers weren’t supposed to be able to do already. Only the tooling had improved, and that mostly affected JavaScript. CSS was a static language, so you couldn’t write a library to make it better. Generating CSS with JavaScript was a possibility, but boy oh boy is that ever a bad idea.

Another problem was that CSS 2 was only really good at styling rectangles. That was fine in the 90s, when every OS had the aesthetic of rectangles containing more rectangles. But now we were in the days of Windows XP and OS X, where everything was shiny and glossy and made of curvy plastic. It was a little embarrassing to have rounded corners and neatly shaded swooshes in your file browser and nowhere on the Web.

Thus began a new reign of darkness.

The era of CSS hacks

Designers wanted a lot of things that CSS just could not offer.

  • Round corners were a big one. Square corners had fallen out of vogue, and now everyone wanted buttons with round corners, since they were The Future. (Native buttons also went out of vogue, for some reason.) Alas, CSS had no way to do this. Your options were:

    1. Make a fixed-size background image of a rounded rectangle and put it on a fixed-size button. Maybe drop the text altogether and just make the whole thing an image. Eugh.

    2. Make a generic background image and scale it to fit. More clever, but the corners might end up not round.

    3. Make the rounded rectangle, cut out the corner and edges, and put them in a 3×3 table with the button label in the middle. Even better, use JavaScript to do this on the fly.

    4. Fuck it, make your entire website one big Flash app lol

    Another problem was that IE 6 didn’t understand PNGs with 8-bit alpha; it could only correctly display PNGs with 1-bit alpha, i.e. every pixel is either fully opaque or fully transparent, like GIFs. You had to settle for jagged edges, bake a solid background color into the image, or apply various fixes that centered around this fucking garbage nonsense:

    filter: progid:DXImageTransform.Microsoft.AlphaImageLoader(src='bite-my-ass.png');
  • Along similar lines: gradients and drop shadows! You can’t have fancy plastic buttons without those. But here you were basically stuck with making images again.

  • Translucency was a bit of a mess. Most browsers supported the CSS 3 opacity property since very early on… except IE, which needed another wacky Microsoft-specific filter thing. And if you wanted only the background translucent, you’d need a translucent PNG, which… well, you know.

  • Since the beginning, jQuery shipped with built-in animated effects like fadeIn, and they started popping up all over the place. It was kind of like the Web equivalent of how every Linux user in the mid-00s (and I include myself in this) used that fucking Compiz cube effect.

    Obviously you need JavaScript to trigger an element’s disappearance in most interesting cases, but using it to control the actual animation was a bit heavy-handed and put a strain on browsers. Tabbed browsing compounded this, since browsers were largely single-threaded, and for various reasons, every open page ran in the same thread.

  • Oh! Alternating background colors on table rows. This has since gone out of style, but I think that’s a shame, because man did it make tables easier to read. But CSS had no answer for this, so you had to either give every other row a class like <tr class="odd"> (hope the table’s generated with code!) or do some jQuery nonsense.

  • CSS 2 introduced the > child selector, so you could write stuff like ul.foo > li to style special lists without messing up nested lists, and IE 6! Didn’t! Fucking! Support! It!

All those are merely aesthetic concerns, though. If you were interested in layout, well, the rise of Firefox had made your life at once much easier and much harder.

Remember inline-block? Firefox 2 actually supported it! It was buggy and hidden behind a vendor prefix, but it more or less worked, which let designers start playing with it. And then Firefox 3 supported it more or less fully, which felt miraculous. Version 3 of our thumbnail grid is as simple as a width and inline-block:

.thumbnails li {
    display: inline-block;
    width: 250px;
    margin: 0.5em;
    vertical-align: top;
}

The general idea of inline-block is that the inside acts like a block, but the block itself is placed in regular flowing text, like an image. Each thumbnail is thus contained in a box, but the boxes all lie next to each other, and because of their equal widths, they flow into a grid. And since it’s functionally a line of text, you don’t have to work around any weird impact on the rest of the page like you had to do with floats.

Sure, this had some drawbacks. You couldn’t do anything with the leftover space, for example, so there was a risk of a big empty void on the right with pathological screen sizes. You still had the problem of breaking the grid with a wide cell. But at least it’s not floats.

One teeny problem: IE 6. It did technically support inline-block, but only on elements that were naturally inline — ones like <b> and <i>, not <li>. So, not ones you’d actually want (or think) to use inline-block on. Sigh.

Lucky for us, at some point an absolute genius discovered hasLayout, an internal optimization in IE that marks whether an element… uh… has… layout. Look, I don’t know. Basically it changes the rendering path for an element — making it differently buggy, like quirks mode on a per-element basis! The upshot is that the above works in IE 6 if you add a couple lines:

.thumbnails li {
    display: inline-block;
    width: 250px;
    margin: 0.5em;
    vertical-align: top;
    *zoom: 1;
    *display: inline;
}

The leading asterisks make the property invalid, so browsers should ignore the whole line… but for some reason I cannot begin to fathom, IE 6 ignores the asterisks and accepts the rest of the rule. (Almost any punctuation worked, including a hyphen or — my personal favorite — an underscore.) The zoom property is a Microsoft extension that scales stuff, with the side effect that it grants the mystical property of “layout” to the element as well. And display: inline should make each element spill its contents into one big line of text, but IE treats an inline element that has “layout” roughly like an inline-block.

And here we saw the true potential of CSS messes. Browser-specific rules, with deliberate bad syntax that one browser would ignore, to replicate an effect that still isn’t clearly described by what you’re writing. Entire tutorials written to explain how to accomplish something simple, like a grid, but have it actually work on most people’s browsers. You’d also see * html, html > /**/ body, and all kinds of other nonsense. Here’s a full list! And remember that “clearfix” hack from before? The full version, compatible with every browser, is a bit worse:

.clearfix:after {
  visibility: hidden;
  display: block;
  font-size: 0;
  content: " ";
  clear: both;
  height: 0;
}
.clearfix { display: inline-block; }
/* start commented backslash hack \*/
* html .clearfix { height: 1%; }
.clearfix { display: block; }
/* close commented backslash hack */

Is it any wonder folks started groaning about CSS?

This was an era of blind copy/pasting in the frustrated hopes of making the damn thing work. Case in point: someone (I dug the original source up once but can’t find it now) had the bone-headed idea of always setting body { font-size: 62.5% } due to a combination of “relative units are good” and wanting to override the seemingly massive default browser font size of 16px (which, it turns out, is correct) and dealing with IE bugs. He walked it back a short time later, but the damage had been done, and now thousands of websites start off that way as a “best practice”. Which means if you want to change your browser’s default font size in either direction, you’re screwed — scale it down and a bunch of the Web becomes microscopic, scale it up and everything will still be much smaller than you’ve asked for, scale it up more to compensate and everything that actually respects your decision will be ginormous. At least we have better page zoom now, I guess.

Oh, and do remember: Stack Overflow didn’t exist yet. This stuff was passed around purely by word of mouth. If you were lucky, you knew about some of the websites about websites, like quirks mode and Eric Meyer’s website.

In fact, check out Meyer’s css/edge site for some wild examples of stuff folks were doing, even with just CSS 1, as far back as 2002. I still think complexspiral is pure genius, even though you could do it nowadays with opacity and just one image. The approach in raggedfloat wouldn’t get native support in CSS until a few years ago, with shape-outside! He also brought us CSS reset, eliminating differences between browsers’ default styles.

(I cannot overstate how much of a CSS pioneer Eric Meyer is. When his young daughter Rebecca died six years ago, she was uniquely immortalized with her own CSS color name, rebeccapurple. That’s how highly the Web community thinks of him. Also I have to go cry a bit over that story now.)

The future arrives, gradually

Designers and developers were pushing the bounds of what browsers were capable of. Browsers were handling it all somewhat poorly. All the fixes and workarounds and libraries were arcane, brittle, error-prone, and/or heavy.

Clearly, browsers needed some new functionality. But just slopping something in wouldn’t help; Microsoft had done plenty of that, and it had mostly made a mess.

Several struggling attempts began. With the W3C’s head still squarely up its own ass — even explicitly rejecting proposed enhancements to HTML, in favor of snorting XML — some folks from (active) browser vendors Apple, Mozilla, and Opera decided to make their own clubhouse. WHATWG came into existence in June 2004, and they began work on HTML5. (It would end up defining error-handling very explicitly, which completely obviated the need for XHTML and eliminated a number of security concerns when working with arbitrary HTML. Also it gave us some new goodies, like native audio, video, and form controls for dates and colors and other stuff that had been clumsily handled by JavaScript-powered custom controls. And, um, still often are.)

Then there was CSS 3. I’m not sure when it started to exist. It emerged slowly, struggling, like a chick hatching from an egg and taking its damn sweet fucking time to actually get implemented anywhere.

I’m having to do a lot of educated guessing here, but I think it began with border-radius. Specifically, with -moz-border-radius. I don’t know when it was first introduced, but the Mozilla bug tracker has mentions of it as far back as 1999.

See, Firefox’s own UI is rendered with CSS. If Mozilla wanted to do something that couldn’t be done with CSS, they added a property of their own, prefixed with -moz- to indicate it was their own invention. And when there’s no real harm in doing so, they leave the property accessible to websites as well.

My guess, then, is that the push for CSS 3 really began when Firefox took off and designers discovered -moz-border-radius. Suddenly, built-in rounded corners were available! No more fucking around in Photoshop; you only needed to write a single line! Practically overnight, everything everywhere had its corners filed down.

And from there, things snowballed. Common problems were addressed one at a time by new CSS features, which were clustered together into a new CSS version: CSS 3. The big ones were solutions to the design problems mentioned before:

  • Rounded corners, provided by border-radius.
  • Gradients, provided by linear-gradient() and friends.
  • Multiple backgrounds, which weren’t exactly a pressing concern, but which turned out to make some other stuff easier.
  • Translucency, provided by opacity and colors with an alpha channel.
  • Box shadows.
  • Text shadows, which had been in CSS 2 but dropped in 2.1 and never implemented anyway.
  • Border images, so you could do even fancier things than mere rounded borders.
  • Transitions and animations, now doable with ease without needing jQuery (or any JS at all).
  • :nth-child(), which solved the alternating rows problem with pure CSS.
  • Transformations. Wait, what? This kinda leaked in from SVG, which browsers were also being expected to implement, and which is built heavily around transforms. The code was already there, so, hey, now we can rotate stuff with CSS! Couldn’t do that before. Cool.
  • Web fonts, which had been in CSS for some time but only ever implemented in IE and only with some goofy DRM-laden font format. Now we weren’t limited to the four bad fonts that ship with Windows and that no one else has!

These were pretty great! They didn’t solve any layout problems, but they did address aesthetic issues that designers had been clumsily working around by using loads of images and/or JavaScript. That meant less stuff to download and more text used instead of images, both of which were pretty good for the Web.

The grand irony is that all the stuff you could do with these features went out of style almost immediately, and now we’re back to flat rectangles again.

Browser prefixing hell

Alas! All was still not right with the world.

Several of these new gizmos were, I believe, initially developed by browser vendors and prefixed. Some later ones were designed by the CSS committee but implemented by browsers while the design was still in flux, and thus also prefixed.

So began prefix hell, which continues to this day.

Mozilla had -moz-border-radius, so when Safari implemented it, it was named -webkit-border-radius (“WebKit” being the name of Apple’s KHTML fork). Then the CSS 3 spec standardized it and called it just border-radius. That meant that if you wanted to use rounded borders, you actually needed to give three rules:

element {
    -moz-border-radius: 1em;
    -webkit-border-radius: 1em;
    border-radius: 1em;
}

The first two made the effect actually work in current browsers, and the last one was future-proofing: when browsers implemented the real rule and dropped the prefixed ones, it would take over.

You had to do this every fucking time, since CSS isn’t a programming language and has no macros or functions or the like. Sometimes Opera and IE would have their own implementations with -o- and -ms- prefixes, bringing the total to five copies. It got much worse with gradients; the syntax went through a number of major incompatible revisions, so you couldn’t even rely on copy/pasting and changing the property name!

And plenty of folks, well, fucked it up. I can’t blame them too much; I mean, this sucks. But enough pages used only the prefixed forms, and not the final form, that browsers had to keep supporting the prefixed form for longer than they would’ve liked to avoid breaking stuff. And if the prefixed form still works and it’s what you’re used to writing, then maybe you still won’t bother with the unprefixed one.

Worse, some people would only use the form that worked in their pet choice of browser. This got especially bad with the rise of mobile web browsers. The built-in browsers on iOS and Android are Safari (WebKit) and Chrome (originally WebKit, now a fork), so you only “needed” to use the -webkit- properties. Which made things difficult for Mozilla when it released Firefox for Android.

Hey, remember that whole debacle with IE 6? Here we are again! It was bad enough that Mozilla eventually decided to implement a number of -webkit- properties, which remain supported even in desktop Firefox to this day. The situation is goofy enough that Firefox now supports some effects only via these properties, like -webkit-text-stroke, which isn’t being standardized.

Even better, Chrome’s current forked engine is called Blink, so technically it shouldn’t be using -webkit- properties either. And yet, here we are. At least it’s not as bad as the user agent string mess.

Browser vendors have pretty much abandoned prefixing, now; instead they hide experimental features behind flags (so they’ll only work on the developer’s machine), and new features are theoretically designed to be smaller and easier to stabilize.

This mess was probably a huge motivating factor for the development of Sass and LESS, two languages that produce CSS. Or… two CSS preprocessors, maybe. They have very similar goals: both add variables, functions, and some form of macros to CSS, allowing you to eliminate a lot of the repetition and browser hacks and other nonsense from your stylesheets. Hell, this blog still uses SCSS, though its use has gradually decreased over time.

Flexbox

But then, like an angel descending from heaven… flexbox.

Flexbox has been around for a long time — allegedly it had partial support in Firefox 2, back in 2006! It went through several incompatible revisions and took ages to stabilize. Then IE took ages to implement it, and you don’t really want to rely on layout tools that only work for half your audience. It’s only relatively recently (2015? Later?) that flexbox has had sufficiently broad support to use safely. And I could swear I still run into folks whose current Safari doesn’t recognize it at all without prefixing, even though Safari supposedly dropped the prefixes five years ago…

Anyway, flexbox is a CSS implementation of a pretty common GUI layout tool: you have a parent with some children, and the parent has some amount of space available, and it gets divided automatically between the children. You know, it puts things next to each other.

The general idea is that the browser computes how much space the parent has available and the “initial size” of each child, figures out how much extra space there is, and distributes it according to the flexibleness of each child. Think of a toolbar: you might want each button to have a fixed size (a flex of 0), but want to add spacers that share any leftover space equally, so you’d give them a flex of 1.
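
To make that concrete, here's a rough sketch of such a toolbar (the class names are made up):

.toolbar {
    display: flex;
}
.toolbar button {
    flex: 0 0 auto;  /* fixed: never grow or shrink past natural size */
}
.toolbar .spacer {
    flex: 1;  /* share all leftover space equally */
}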

Once that’s done, you have a number of quality-of-life options at your disposal, too: you can distribute the extra space between the children instead, you can tell the children to stretch to the same height or align them in various ways, and you can even have them wrap into multiple rows if they won’t all fit!

With this, we can take yet another crack at that thumbnail grid:

.thumbnail-grid {
    display: flex;
    flex-wrap: wrap;
}
.thumbnail-grid li {
    flex: 1 0 250px;
}

This is miraculous. I forgot all about inline-block overnight and mostly salivated over this until it was universally supported. It even expresses very clearly what I want.

…almost. It still has the problem that too-wide cells will break the grid, since it’s still a horizontal row wrapped onto several independent lines. It’s pretty damn cool, though, and solves a number of other layout problems. Surely this is good enough. Unless…?

I’d say mass adoption of flexbox marked the beginning of the modern era of CSS. But there was one lingering problem…

The slow, agonizing death of IE

IE 6 took a long, long, long time to go away. It didn’t drop below 10% market share (still a huge chunk) until early 2010 or so.

Firefox hit 1.0 at the end of 2004. IE 7 wasn’t released until two years later, it offered only modest improvements, it suffered from compatibility problems with stuff built for IE 6, and the IE 6 holdouts (many of whom were not Computer People) generally saw no reason to upgrade. Vista shipped with IE 7, but Vista was kind of a flop — I don’t believe it ever came close to overtaking XP, not in its entire lifetime.

Other factors included corporate IT policies, which often take the form of “never upgrade anything ever” — and often for good reason, as I heard endless tales of internal apps that only worked in IE 6 for all manner of horrifying reasons. Then there was the entirety of South Korea, which was legally required to use IE 6 because they’d enshrined in law some security requirements that could only be implemented with an IE 6 ActiveX control.

So if you maintained a website that was used — or worse, required — by people who worked for businesses or lived in other countries, you were pretty much stuck supporting IE 6. Folks making little personal tools and websites abandoned IE 6 compatibility early on and plastered their sites with increasingly obnoxious banners taunting anyone who dared show up using it… but if you were someone’s boss, why would you tell them it’s okay to drop 20% of your potential audience? Just work harder!

The tension grew over the years, as CSS became more capable and IE 6 remained an anchor. It still didn’t even understand PNG alpha without workarounds, and meanwhile we were starting to get more critical features like native video in HTML5. The workarounds grew messier, and the list of features you basically just couldn’t use grew longer. (I’d show you what my blog looks like in IE 6, but I don’t think it can even connect — the TLS stuff it supports is so ancient and broken that it’s been disabled on most servers!)

Shoutouts, by the way, to some folks on the YouTube team, who in July 2009 added a warning banner imploring IE 6 users to switch to anything else — without asking anyone for approval. “Within one month… over 10 percent of global IE6 traffic had dropped off.” Not all heroes wear capes.

I’d mark the beginning of the end as the day YouTube actually dropped IE 6 support — March 13, 2010, almost nine years after its release. I don’t know how much of a direct impact YouTube has on corporate users or the South Korean government, but a massive web company dropping an entire browser sends a pretty strong message.

There were other versions of IE, of course, and many of them were messy headaches in their own right. But each subsequent one became less of a pain, and nowadays you don’t even have to think too much about testing in IE (now Edge). Just in time for Microsoft to scrap their own rendering engine and turn their browser into a Chrome clone.

Now

CSS is pretty great now. You don’t need weird fucking hacks just to put things next to each other. Browser dev tools are built in, now, and are fucking amazing — Firefox has started specifically warning you when some CSS properties won’t take effect because of the values of others! Obscure implicit side effects like “stacking contexts” (whatever those are) can now be set explicitly, with properties like isolation: isolate.

In fact, let me just list everything that I can think of that you can do in CSS now. This isn’t a guide to all possible uses of styling, but if your CSS knowledge hasn’t been updated since 2008, I hope this whets your appetite. And this stuff is just CSS! So many things that used to be impossible or painful or require clumsy plugins are now natively supported — audio, video, custom drawing, 3D rendering… not to mention the vast ergonomic improvements to JavaScript.

Layout

A grid container can do pretty much anything tables can do, and more, including automatically determining how many columns will fit. It’s fucking amazing. More on that below.

A flexbox container lays out its children in a row or column, allowing each child to declare its “default” size and what proportion of leftover space it wants to consume. Flexboxes can wrap, rearrange children without changing source order, and align children in a number of ways.

Columns will pour text into, well, multiple columns.

The box-sizing property lets you opt into the IE box model on a per-element basis, for when you need an entire element to take up a fixed amount of space and need padding/borders to subtract from that.
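
For example, the now-common blanket opt-in is just this (a sketch, but a widely used pattern):

* {
    box-sizing: border-box;  /* padding and border now count toward the declared width */
}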

display: contents dumps an element’s contents out into its parent, as if it weren’t there at all. display: flow-root is basically an automatic clearfix, only a decade too late.

width can now be set to min-content, max-content, or the fit-content() function for more flexible behavior.

white-space: pre-wrap preserves whitespace, but breaks lines where necessary to avoid overflow. Also useful is pre-line, which collapses sequences of spaces down to a single space, but preserves literal newlines.

text-overflow cuts off overflowing text with an ellipsis (or custom character) rather than simply truncating it. Also specced is the ability to fade out the text, but this is as yet unimplemented.
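
Note that it only kicks in alongside a couple of other properties; a minimal sketch (the class name is made up):

.crumb {
    white-space: nowrap;      /* keep everything on one line */
    overflow: hidden;         /* clip whatever doesn't fit */
    text-overflow: ellipsis;  /* and mark the clip point with … */
}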

shape-outside alters the shape used when wrapping text around a float. It can even use the alpha channel of an image as the shape.

resize gives an arbitrary element a resize handle (as long as it has overflow).

writing-mode sets the direction that text flows. If your design needs to work for multiple writing modes, a number of CSS properties that mention left/right/top/bottom have alternatives that describe directions in terms of the writing mode: inset-block and inset-inline for position, block-size and inline-size for width/height, border-block and border-inline for borders, and similar for padding and margins.

Aesthetics

Transitions smoothly interpolate a value whenever it changes, whether due to an effect like :hover or e.g. a class being added from JavaScript. Animations are similar, but play a predefined animation automatically. Both can use a number of different easing functions.
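
As a tiny sketch, the kind of hover fade that once called for jQuery is now a single extra declaration:

a {
    transition: color 0.2s ease;  /* animate any color change over 200 ms */
}
a:hover {
    color: teal;
}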

border-radius rounds off the corners of a box. The corners can all be different sizes, and can be circular or elliptical. The curve also applies to the border, background, and any box shadows.

Box shadows can be used for the obvious effect of casting a drop shadow. You can also use multiple shadows and inset shadows for a variety of clever effects.

text-shadow does what it says on the tin, though you can also stack several of them for a rough approximation of a text outline.

transform lets you apply an arbitrary matrix transformation to an element — that is, you can scale, rotate, skew, translate, and/or do perspective transform, all without affecting layout.

filter (distinct from the IE 6 one) offers a handful of specific visual filters you can apply to an element. Most of them affect color, but there’s also a blur() and a drop-shadow() (which, unlike box-shadow, applies to an element’s appearance rather than its containing box).

linear-gradient(), radial-gradient(), the new and less-supported conic-gradient(), and their repeating-* variants all produce gradient images and can be used anywhere in CSS that an image is expected, most commonly as a background-image.

scrollbar-color changes the scrollbar color, with the downside of reducing the scrollbar to a very simple thumb-and-track in current browsers.

background-size: cover and contain will scale a background image proportionally, either big enough to completely cover the element (even if cropped) or small enough to exactly fit inside it (even if it doesn’t cover the entire background).

object-fit is a similar idea but for non-background media, like <img>s. The related object-position is like background-position.

Multiple backgrounds are possible, which is especially useful with gradients — you can stack multiple gradients, other background images, and a solid color on the bottom.
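
For instance, a dark gradient stacked over a photo, with a solid color at the bottom of the pile, might look like this (hero.jpg and the class name are made up):

.hero {
    background-image:
        linear-gradient(rgba(0, 0, 0, 0.5), transparent),  /* top layer */
        url(hero.jpg);                                      /* under the gradient */
    background-color: navy;  /* bottom of the stack */
}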

text-decoration is fancier than it used to be; you can now set the color of the line and use several different kinds of lines, including dashed, dotted, and wavy.

CSS counters can be used to number arbitrary elements in an arbitrary way, exposing the counting ability of <ol> to any set of elements you want.

The ::marker pseudo-element allows you to style a list item’s marker box, or even replace it outright with a custom counter. Browser support is spotty, but improving. Similarly, the @counter-style at-rule implements an entirely new counter style (like 1 2 3, i ii iii, A B C, etc.) which you can then use anywhere, though only Firefox supports it so far.

image-set() provides a list of candidate images and lets the browser choose the most appropriate one based on the pixel density of the user’s screen.

@font-face defines a font that can be downloaded, though you can avoid figuring out how to use it correctly by using Google Fonts.

pointer-events: none makes an element ignore the mouse entirely; it can’t be hovered, and clicks will go straight through it to the element below.

image-rendering can force an image to be resized nearest-neighbor rather than interpolated, though browser support is still spotty and you may need to also include some vendor-specific properties.

clip-path crops an element to an arbitrary shape. There’s also mask for arbitrary alpha masking, but browser support is spotty and hoo boy is this one complicated.

Syntax and misc

@supports lets you explicitly write different CSS depending on what the browser supports, though it’s nowhere near as useful nowadays as it would’ve been in 2004.
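
A minimal sketch, reusing the thumbnail grid from earlier:

/* only applied by browsers that actually understand grid */
@supports (display: grid) {
    .thumbnail-grid {
        display: grid;
    }
}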

A > B selects immediate children. A + B selects the immediately following sibling. A ~ B selects all following siblings. Square brackets can do a bunch of stuff to select based on attributes; most obvious is input[type=checkbox], though you can also do interesting things with matching parts of <a href>.

There are a whole bunch of pseudo-classes now. Many of them are for form elements: :enabled and :disabled; :checked and :indeterminate (also apply to radio and <option>); :required and :optional; :read-write and :read-only; :in-range/:out-of-range and :valid/:invalid (for use with HTML5 client-side form validation); :focus and :focus-within; and :default (which selects the default form button and any pre-selected checkboxes, radio buttons, and <option>s).

For targeting specific elements within a set of siblings, we have: :first-child, :last-child, and :only-child; :first-of-type, :last-of-type, and :only-of-type (where “type” means tag name); and :nth-child(), :nth-last-child(), :nth-of-type(), and :nth-last-of-type() (to select every second, third, etc. element).
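
That zebra-striped table from earlier is now one rule, with no <tr class="odd"> generation required:

tr:nth-child(even) {
    background: #eee;
}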

:not() inverts a selector. :empty selects elements with no children and no text. :target selects the element jumped to with a URL fragment (e.g. if the address bar shows index.html#foo, this selects the element whose ID is foo).

::before and ::after should have two colons now, to indicate that they create pseudo-elements rather than merely scoping the selector they’re attached to. ::selection customizes how selected text appears; ::placeholder customizes how placeholder text (in text fields) appears.

Media queries do just a whole bunch of stuff so your page can adapt based on how it’s being viewed. The prefers-color-scheme media query tells you if the user’s system is set to a light or dark theme, so you can adjust accordingly without having to ask.
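
A minimal sketch of the latter:

@media (prefers-color-scheme: dark) {
    body {
        background: #222;
        color: #ddd;
    }
}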

You can write translucent colors as #rrggbbaa or #rgba, as well as using the rgba() and hsla() functions.

Angles can be described as fractions of a full circle with the turn unit. Of course, deg and rad (and grad) are also available.

CSS variables (officially, “custom properties”) let you specify arbitrary named values that can be used anywhere a value would appear. You can use this to reduce the amount of CSS fiddling that needs doing in JavaScript (e.g., recolor a complex part of a page by setting a CSS variable instead of manually adjusting a number of properties), or have a generic component that reacts to variables set by an ancestor.
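
A quick sketch (the --accent name is made up): define a value once on an ancestor and use it anywhere below.

:root {
    --accent: rebeccapurple;
}
button {
    color: var(--accent);
    border: 1px solid var(--accent);
}

Change --accent in one place, whether from a stylesheet, a media query, or JavaScript, and everything using it follows.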

calc() computes an arbitrary expression and updates automatically (though it’s somewhat obviated by box-sizing).

The vw, vh, vmin, and vmax units let you specify lengths as a fraction of the viewport’s width or height, or whichever of the two is bigger/smaller.


Phew! I’m sure I’m forgetting plenty and folks will have even longer lists of interesting tidbits in the comments. Thanks for saving me some effort! Now I can stop browsing MDN and do this final fun part.

State of the art thumbnail grid

At long last, we arrive at the final and objectively correct way to construct a thumbnail grid: using CSS grid. You can tell this is the right thing to use because it has “grid” in the name. Modern CSS features are pretty great about letting you say the thing you want and having it happen, rather than trying to coax it into happening implicitly via voodoo.

And it is oh so simple:

.thumbnail-grid {
    display: grid;
    grid: auto-flow / repeat(auto-fit, minmax(250px, 1fr));
}

Done! That gives you a grid. You have myriad other twiddles to play with, just as with flexbox, but that’s the basic idea. You don’t even need to style the elements themselves; most of the layout work is done in the container.

The grid shorthand property looks a little intimidating, but only because it’s so flexible. It’s saying: fill the grid one row at a time, generating as many rows as necessary; make as many 250px columns as will fit, and share any leftover space between them equally.

CSS grids are also handy for laying out <dl>s, something that’s historically been a massive pain to make work — a <dl> contains any number of <dt>s followed by any number of <dd>s (including zero), and the only way to style this until grid was to float the <dt>s, which meant they had to have a fixed width. Now you can just tell the <dt>s to go in the first column and <dd>s to go in the second, and grid will take care of the rest.
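
A sketch of that approach, relying on nothing fancier than default auto-placement:

dl {
    display: grid;
    grid-template-columns: max-content 1fr;
}
dt {
    grid-column: 1;
}
dd {
    grid-column: 2;
    margin: 0;  /* drop the default indent; the grid handles alignment now */
}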

And laying out your page? That whole sidebar thing? Check out how easy that is:

body {
    display: grid;
    grid-template:
        "header         header          header"
        "left-sidebar   main-content    right-sidebar"
        "footer         footer          footer"
        / 1fr           6fr             1fr
    ;
}
body > header {
    grid-area: header;
}
#left-sidebar {
    grid-area: left-sidebar;
}
/* ... etc ... */
body > header {
    grid-area: header;
}
#left-sidebar {
    grid-area: left-sidebar;
}
/* ... etc ... */

Done. Easy. It doesn’t matter what order the parts appear in the markup, either.

On the other hand

The web is still a little bit of a disaster. A lot of folks don’t even know that flexbox and grid are supported almost universally now; but given how long it took to get from early spec work to broad implementation, I can’t really blame them. I saw a brand new little site just yesterday that consisted mostly of a huge list of “thumbnails” of various widths, and it used floats! Not even inline-block! I don’t know how we managed to teach everyone about all the hacks required to make that work, but somehow haven’t gotten the word out about flexbox.

But far worse than that: I still regularly encounter sites that do their entire page layout with JavaScript. If you use uMatrix, your first experience is with a pile of text overlapping a pile of other text. Surely this is a step backwards? What are you possibly doing that your header and sidebar can only be laid out correctly by executing code? It’s not like the page loads with no CSS — nothing in plain HTML will overlap by default! You have to tell it to do that!

And then there’s the mobile web, which despite everyone’s good intentions, has kind of turned out to be a failure. The idea was that you could use CSS media queries to fit your normal site on a phone screen, but instead, most major sites have entirely separate mobile versions. Which means that either the mobile site is missing a bunch of important features and I’ll have to awkwardly navigate that on my phone anyway, or the desktop site is full of crap that nobody actually needs.

(Meanwhile, Google’s own Android versions of Docs/Sheets/etc. have, like, 5% of the features of the Web versions? Not sure what to make of that.)

Hmm. Strongly considering writing something that goes more into detail about improvements to CSS since the Firefox 3 era, similar to the one I wrote for JavaScript. But this post is long enough.

Some futures that never were

I don’t know what’s coming next in CSS, especially now that flexbox and grid have solved all our problems. I’m vaguely aware of some work being done on more extensive math support, and possibly some functions for altering colors like in Sass. There’s a painting API that lets you generate backgrounds on the fly with JavaScript using the canvas API, which is… quite something. Apparently it’s now in spec that you can use attr() (which evaluates to the value of an HTML attribute) as the value for any property, which seems cool and might even let you implement HTML tables entirely in CSS, but you could do the same thing with variables. I mean, um, custom properties. I’m more excited about :is(), which matches any of a list of selectors, and subgrid, which lets you add some nesting to a grid but keep grandchildren still aligned to it.

Much easier is to list some things that were the future, but fizzled out.

  • display: run-in has been part of CSS since version 2 (way back in ‘98), but it’s basically unsupported. The idea is that a “run-in” box is inserted, inline, into the next block, so this:

    <h2 style="display: run-in;">Title</h2>
    <p>Paragraph</p>
    <p>Paragraph</p>

    displays like this:

    Title Paragraph

    Paragraph

    And, ah, hm, I’m starting to see why it’s unsupported. It used to exist in WebKit, but was apparently so unworkable as to be removed six years ago.

  • “Alternate stylesheets” were popular in the early 00s, at least on a few of my friends’ websites. The idea was that you could list more than one stylesheet for your site (presumably for different themes), and the browser would give the user a list of them. Alas, that list was always squirrelled away in a menu with no obvious indication of when it was actually populated, so in the end, everyone who wanted multiple themes just implemented an in-page theme switcher themselves.

    This feature is still supported, but apparently Chrome never bothered implementing it, so it’s effectively dead.

  • More generally, the original CSS spec clearly expects users to be able to write their own CSS for a website — right in paragraph 2 it says

    …the reader may have a personal style sheet to adjust for human or technological handicaps.

    Hey, that sounds cool. But it never materialized as a browser feature. Firefox has userContent.css and some URL selectors for writing per-site rules, but that’s relatively obscure.

    Still, there’s clearly demand for the concept, as evidenced by the popularity of the Stylish extension — which does just this. (Too bad it was bought by some chucklefucks who started using it to suck up browser data to sell to advertisers. Use Stylus instead.)

  • A common problem (well, for me) is that of styling the label for a checkbox, depending on its state. Styling the checkbox itself is easy enough with the :checked pseudo-selector. But if you arrange a checkbox and its label in the obvious way:

    <label><input type="checkbox"> Description of what this does</label>

    …then CSS has no way to target either the <label> element or the text node. jQuery’s (originally custom) selector engine offered a custom :has() pseudo-class, which could be used to express this:

    /* checkbox label turns bold when checked */
    label:has(input:checked) {
        font-weight: bold;
    }

    Early CSS 3 selector discussions seemingly wanted to avoid this, I guess for performance reasons? The somewhat novel alternative was to write out the entire selector, but be able to alter which part of it the rules affected with a “subject” indicator. At first this was a pseudo-class:

    label:subject input:checked {
        font-weight: bold;
    }

    Then later, they introduced a ! prefix instead:

    !label input:checked {
        font-weight: bold;
    }

    Thankfully, this was decided to be a bad idea, so the current specced way to do this is… :has()! Unfortunately, it’s only allowed when querying from JavaScript, not in a live stylesheet, and nothing implements it anyway. 20 years and I’m still waiting for a way to style checkbox labels.

  • <style scoped> was an attribute that would’ve made a <style> element’s CSS rules only apply to other elements within its immediate parent, meaning you could drop in arbitrary (possibly user-written) CSS without any risk of affecting the rest of the page. Alas, this was quietly dropped some time ago, with shadow DOM suggested as a wildly inappropriate replacement.

  • I seem to recall that when I first heard about Web components, they were templates you could use to reduce duplication in pure HTML? But I can’t find any trace of that concept now, and the current implementations require JavaScript to define them, so there’s nothing declarative linking a new tag to its implementation. Which makes them completely unusable for anything that doesn’t have a compelling reason to rely on JS. Alas.

  • <blink> and <marquee>. RIP. Though both can be easily replicated with CSS animations.

That’s it

You’re still here? It’s over. Go home.

And maybe push back against Blink monoculture and use Firefox, including on your phone, unless for some reason you use an iPhone, which forbids other browser engines, which is far worse than anything Microsoft ever did, but we just kinda accept it for some reason.

Storing Encrypted Credentials In Git

Post Syndicated from Bozho original https://techblog.bozho.net/storing-encrypted-credentials-in-git/

We all know that we should not commit any passwords or keys to the repo with our code (no matter if public or private). Yet, thousands of production passwords can be found on GitHub (and probably thousands more in internal company repositories). Some have tried to fix that by removing the passwords (once they learned it’s not a good idea to store them publicly), but passwords have remained in the git history.

Knowing what not to do is the first and very important step. But how do we store production credentials? Database credentials, system secrets (e.g. for HMACs), access keys for 3rd party services like payment providers or social networks. There doesn’t seem to be an agreed upon solution.

I’ve previously argued with the 12-factor app recommendation to use environment variables – if you have a few, that might be okay, but when the number of variables grows (as in any real application), it becomes impractical. And you can set environment variables via a bash script, but you’d have to store it somewhere. And in fact, even separate environment variables should be stored somewhere.

This somewhere could be a local directory (risky), a shared storage, e.g. FTP or S3 bucket with limited access, or a separate git repository. I think I prefer the git repository as it allows versioning (Note: S3 also does, but is provider-specific). So you can store all your environment-specific properties files with all their credentials and environment-specific configurations in a git repo with limited access (only Ops people). And that’s not bad, as long as it’s not the same repo as the source code.

Such a repo would look like this:

project
└─── production
|   |   application.properties
|   |   keystore.jks
└─── staging
|   |   application.properties
|   |   keystore.jks
└─── on-premise-client1
|   |   application.properties
|   |   keystore.jks
└─── on-premise-client2
|   |   application.properties
|   |   keystore.jks

Since many companies are using GitHub or BitBucket for their repositories, storing production credentials on a public provider may still be risky. That’s why it’s a good idea to encrypt the files in the repository. A good way to do it is via git-crypt. Its encryption is “transparent”: files are encrypted and decrypted on the fly, and diff still works. Once you set it up, you continue working with the repo as if it’s not encrypted. There’s even a fork that works on Windows.

You simply run git-crypt init (after you’ve put the git-crypt binary on your OS Path), which generates a key. Then you specify your .gitattributes, e.g. like that:

secretfile filter=git-crypt diff=git-crypt
*.key filter=git-crypt diff=git-crypt
*.properties filter=git-crypt diff=git-crypt
*.jks filter=git-crypt diff=git-crypt

And you’re done. Well, almost. If this is a fresh repo, everything is good. If it is an existing repo, you’d have to clean up your history which contains the unencrypted files. Following these steps will get you there, with one addition – before calling git commit, you should call git-crypt status -f so that the existing files are actually encrypted.

You’re almost done. We should somehow share and backup the keys. For the sharing part, it’s not a big issue to have a team of 2-3 Ops people share the same key, but you could also use the GPG option of git-crypt (as documented in the README). What’s left is to backup your secret key (that’s generated in the .git/git-crypt directory). You can store it (password-protected) in some other storage, be it a company shared folder, Dropbox/Google Drive, or even your email. Just make sure your computer is not the only place where it’s present and that it’s protected. I don’t think key rotation is necessary, but you can devise some rotation procedure.
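
For example, exporting the key for backup and then using it to unlock a fresh clone might look roughly like this (the paths and repo name are illustrative):

git-crypt export-key /secure/backup/git-crypt-key

git clone git@example.com:company/environment-config.git
cd environment-config
git-crypt unlock /secure/backup/git-crypt-key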

The git-crypt authors claim it shines when it comes to encrypting just a few files in an otherwise public repo, and recommend looking at git-remote-gcrypt otherwise. But as often there are non-sensitive parts of environment-specific configurations, you may not want to encrypt everything. And I think it’s perfectly fine to use git-crypt even in a separate repo scenario. And even though encryption is an okay approach to protect credentials in your source code repo, it’s still not necessarily a good idea to have the environment configurations in the same repo. Especially given that different people/teams manage these credentials. Even in small companies, maybe not all members have production access.

The outstanding question in this case is: how do you sync the properties with code changes? Sometimes the code adds new properties that should be reflected in the environment configurations. There are two scenarios here – first, properties that could vary across environments, but can have default values (e.g. scheduled job periods), and second, properties that require explicit configuration (e.g. database credentials). The former can have the default values bundled in the code repo and therefore in the release artifact, allowing external files to override them. The latter should be announced to the people who do the deployment so that they can set the proper values.

The whole process of having versioned environment-specific configurations is actually quite simple and logical, even with the encryption added to the picture. And I think it’s a good security practice we should try to follow.

The post Storing Encrypted Credentials In Git appeared first on Bozho's tech blog.

Friday Squid Blogging: Do Cephalopods Contain Alien DNA?

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/06/friday_squid_bl_627.html

Maybe not DNA, but biological somethings.

“Cause of Cambrian explosion — Terrestrial or Cosmic?”:

Abstract: We review the salient evidence consistent with or predicted by the Hoyle-Wickramasinghe (H-W) thesis of Cometary (Cosmic) Biology. Much of this physical and biological evidence is multifactorial. One particular focus are the recent studies which date the emergence of the complex retroviruses of vertebrate lines at or just before the Cambrian Explosion of ~500 Ma. Such viruses are known to be plausibly associated with major evolutionary genomic processes. We believe this coincidence is not fortuitous but is consistent with a key prediction of H-W theory whereby major extinction-diversification evolutionary boundaries coincide with virus-bearing cometary-bolide bombardment events. A second focus is the remarkable evolution of intelligent complexity (Cephalopods) culminating in the emergence of the Octopus. A third focus concerns the micro-organism fossil evidence contained within meteorites as well as the detection in the upper atmosphere of apparent incoming life-bearing particles from space. In our view the totality of the multifactorial data and critical analyses assembled by Fred Hoyle, Chandra Wickramasinghe and their many colleagues since the 1960s leads to a very plausible conclusion — life may have been seeded here on Earth by life-bearing comets as soon as conditions on Earth allowed it to flourish (about or just before 4.1 Billion years ago); and living organisms such as space-resistant and space-hardy bacteria, viruses, more complex eukaryotic cells, fertilised ova and seeds have been continuously delivered ever since to Earth so being one important driver of further terrestrial evolution which has resulted in considerable genetic diversity and which has led to the emergence of mankind.

Two commentaries.

This is almost certainly not true.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

DNS over HTTPS in Firefox

Post Syndicated from corbet original https://lwn.net/Articles/756262/rss

The Mozilla blog has an article describing the addition of DNS over HTTPS (DoH) as an optional feature in the Firefox browser. “DoH support has been added to Firefox 62 to improve the way Firefox interacts with DNS. DoH uses encrypted networking to obtain DNS information from a server that is configured within Firefox. This means that DNS requests sent to the DoH cloud server are encrypted while old style DNS requests are not protected.” The configured server is hosted by Cloudflare, which has posted this privacy agreement about the service.

Hiring a Director of Sales

Post Syndicated from Yev original https://www.backblaze.com/blog/hiring-a-director-of-sales/

Backblaze is hiring a Director of Sales. This is a critical role for Backblaze as we continue to grow the team. We need a strong leader who has experience in scaling a sales team and who has an excellent track record for exceeding goals by selling Software as a Service (SaaS) solutions. In addition, this leader will need to be highly motivated, as well as able to create and develop a highly-motivated, success oriented sales team that has fun and enjoys what they do.

The History of Backblaze from our CEO
In 2007, after a friend’s computer crash caused her some suffering, we realized that with every photo, video, song, and document going digital, everyone would eventually lose all of their information. Five of us quit our jobs to start a company with the goal of making it easy for people to back up their data.

Like many startups, for a while we worked out of a co-founder’s one-bedroom apartment. Unlike most startups, we made an explicit agreement not to raise funding during the first year. We would then touch base every six months and decide whether to raise or not. We wanted to focus on building the company and the product, not on pitching and slide decks. And critically, we wanted to build a culture that understood money comes from customers, not the magical VC giving tree. Over the course of 5 years we built a profitable, multi-million dollar revenue business — and only then did we raise a VC round.

Fast forward 10 years later and our world looks quite different. You’ll have some fantastic assets to work with:

  • A brand millions recognize for openness, ease-of-use, and affordability.
  • A computer backup service that stores over 500 petabytes of data, has recovered over 30 billion files for hundreds of thousands of paying customers — most of whom self-identify as being the people that find and recommend technology products to their friends.
  • Our B2 service that provides the lowest cost cloud storage on the planet at 1/4th the price Amazon, Google or Microsoft charges. While being a newer product on the market, it already has over 100,000 IT and developers signed up as well as an ecosystem building up around it.
  • A growing, profitable and cash-flow positive company.
  • And last, but most definitely not least: a great sales team.

You might be saying, “sounds like you’ve got this under control — why do you need me?” Don’t be misled. We need you. Here’s why:

  • We have a great team, but we are in the process of expanding and we need to develop a structure that will easily scale and provide the most success to drive revenue.
  • We just launched our outbound sales efforts and we need someone to help develop that into a fully successful program that’s building a strong pipeline and closing business.
  • We need someone to work with the marketing department and figure out how to generate more inbound opportunities that the sales team can follow up on and close.
  • We need someone who will work closely in developing the skills of our current sales team and build a path for career growth and advancement.
  • We want someone to manage our Customer Success program.

So that’s a bit about us. What are we looking for in you?

Experience: As a sales leader, you will strategically build and drive the territory’s sales pipeline by assembling and leading a skilled team of sales professionals. This leader should be familiar with generating, developing and closing software subscription (SaaS) opportunities. We are looking for a self-starter who can manage a team and make an immediate impact of selling our Backup and Cloud Storage solutions. In this role, the sales leader will work closely with the VP of Sales, marketing staff, and service staff to develop and implement specific strategic plans to achieve and exceed revenue targets, including new business acquisition as well as build out our customer success program.

Leadership: We have an experienced team who’s brought us to where we are today. You need to have the people and management skills to get them excited about working with you. You need to be a strong leader and compassionate about developing and supporting your team.

Data driven and creative: The data has to show something makes sense before we scale it up. However, without creativity, it’s easy to say “the data shows it’s impossible” or to find a local maximum. Whether it’s deciding how to scale the team, figuring out what our outbound sales efforts should look like or putting a plan in place to develop the team for career growth, we’ve seen a bit of creativity get us places a few extra dollars couldn’t.

Jive with our culture: Strong leaders affect culture and the person we hire for this role may well shape, not only fit into, ours. But to shape the culture you have to be accepted by the organism, which means a certain set of shared values. We default to openness with our team, our customers, and everyone if possible. We love initiative — without arrogance or dictatorship. We work to create a place people enjoy showing up to work. That doesn’t mean ping pong tables and foosball (though we do try to have perks & fun), but it means people are friendly, non-political, working to build a good service but also a good place to work.

Do the work: Ideas and strategy are critical, but good execution makes them happen. We’re looking for someone who can help the team execute both from the perspective of being capable of guiding and organizing, but also someone who is hands-on themselves.

Additional Responsibilities needed for this role:

  • Recruit, coach, mentor, manage and lead a team of sales professionals to achieve yearly sales targets. This includes closing new business and expanding upon existing clientele.
  • Expand the customer success program to provide the best customer experience possible resulting in upsell opportunities and a high retention rate.
  • Develop effective sales strategies and deliver compelling product demonstrations and sales pitches.
  • Acquire and develop the appropriate sales tools to make the team efficient in their daily work flow.
  • Apply a thorough understanding of the marketplace, industry trends, funding developments, and products to all management activities and strategic sales decisions.
  • Ensure that sales department operations function smoothly, with the goal of facilitating sales and/or closings; operational responsibilities include accurate pipeline reporting and sales forecasts.
  • This position will report directly to the VP of Sales and will be staffed in our headquarters in San Mateo, CA.

Requirements:

  • 7 – 10+ years of successful sales leadership experience as measured by sales performance against goals.
  • Experience in developing skill sets and providing career growth and opportunities through advancement of team members.
  • Background in selling SaaS technologies with a strong track record of success.
  • Strong presentation and communication skills.
  • Must be able to travel occasionally nationwide.
  • BA/BS degree required

Think you want to join us on this adventure?
Send an email to jobscontact@backblaze.com with the subject “Director of Sales.” (Recruiters and agencies, please don’t email us.) Include a resume and answer these two questions:

  1. How would you approach evaluating the current sales team and what is your process for developing a growth strategy to scale the team?
  2. What are the goals you would set for yourself in the 3 month and 1-year timeframes?

Thank you for taking the time to read this and I hope that this sounds like the opportunity for which you’ve been waiting.

Backblaze is an Equal Opportunity Employer.

The post Hiring a Director of Sales appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Randomly generated, thermal-printed comics

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/random-comic-strip-generation-vomit-comic-robot/

Python code creates curious, wordless comic strips at random, spewing them from the thermal printer mouth of a laser-cut body reminiscent of Disney Pixar’s WALL-E: meet the Vomit Comic Robot!

The age of the thermal printer!

Thermal printers allow you to instantly print photos, data, and text using a few lines of code, with no need for ink. More and more makers are using this handy, low-maintenance bit of kit for truly creative projects, from Pierre Muth’s tiny PolaPi-Zero camera to the sound-printing Waves project by Eunice Lee, Matthew Zhang, and Bomani McClendon (and our own Secret Santa Babbage).

Vomiting robots

Interaction designer and developer Cadin Batrack, whose background is in game design and interactivity, has built the Vomit Comic Robot, which creates “one-of-a-kind comics on demand by processing hand-drawn images through a custom software algorithm.”

The robot is made up of a Raspberry Pi 3, a USB thermal printer, and a handful of LEDs.

Comic Vomit Robot Cadin Batrack's Raspberry Pi comic-generating thermal printer machine

At the press of a button, Processing code selects one of a set of Cadin’s hand-drawn empty comic grids and then randomly picks images from a library to fill in the gaps.

Vomit Comic Robot Cadin Batrack's Raspberry Pi comic-generating thermal printer machine

Each image is associated with data that allows the code to fit it correctly into the available panels. Cadin says about the concept behind his build:

Although images are selected and placed randomly, the comic panel format suggests relationships between elements. Our minds create a story where there is none in an attempt to explain visuals created by a non-intelligent machine.
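
Cadin’s Processing code isn’t included in the post, but the selection step is easy to sketch. Below is a minimal Python version of the idea; the grid files, image library, and per-panel metadata are hypothetical stand-ins, not his actual data format:

import json
import random
from PIL import Image

# Hypothetical inputs: empty comic grids, plus a library of drawings,
# each tagged with the panel size classes it can plausibly fill.
GRIDS = ["grid_a.png", "grid_b.png", "grid_c.png"]
LIBRARY = json.load(open("library.json"))  # e.g. [{"file": "bird.png", "fits": ["small", "wide"]}, ...]

def fill_comic(grid_path, panels):
    """Paste a randomly chosen, size-compatible drawing into each panel.

    `panels` is assumed metadata: a list of dicts giving each panel's
    position, pixel size, and a size class like "small" or "wide".
    """
    comic = Image.open(grid_path).convert("RGB")
    for panel in panels:
        candidates = [img for img in LIBRARY if panel["class"] in img["fits"]]
        choice = random.choice(candidates)
        drawing = Image.open(choice["file"]).resize((panel["w"], panel["h"]))
        comic.paste(drawing, (panel["x"], panel["y"]))
    return comic

# One-of-a-kind comic: a random grid, randomly filled.
grid = random.choice(GRIDS)
fill_comic(grid, json.load(open(grid + ".panels.json"))).save("comic.png")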

The Raspberry Pi saves the final image as a high-resolution PNG file (so that Cadin can sell prints on thick paper via Etsy), and a Python script sends it to be vomited up by the thermal printer.
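
The post doesn’t show the printing script itself, but sending a PNG to a USB thermal printer takes only a few lines with the python-escpos library. The vendor and product IDs below are placeholders; you would look up your own printer’s with lsusb:

from PIL import Image
from escpos.printer import Usb

# Placeholder vendor/product IDs; find yours with lsusb.
printer = Usb(0x0416, 0x5011)

# Scale the high-resolution comic down to a typical 384-dot print width.
comic = Image.open("comic.png")
comic = comic.resize((384, int(comic.height * 384 / comic.width)))
comic.save("comic_print.png")

printer.image("comic_print.png")  # dithers and prints the bitmap
printer.cut()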

Comic Vomit Robot Cadin Batrack's Raspberry Pi comic-generating thermal printer machine

For more about the Vomit Comic Robot, check out Cadin’s blog. If you want to recreate it, you can find the info you need in the Imgur album he has put together.

We ❤ cute robots

We have a soft spot for cute robots here at Pi Towers, and of course we make no exception for the Vomit Comic Robot. If, like us, you’re a fan of adorable bots, check out Mira, the tiny interactive robot by Alonso Martinez, and Peeqo, the GIF bot by Abhishek Singh.

Mira Alfonso Martinez Raspberry Pi

The post Randomly generated, thermal-printed comics appeared first on Raspberry Pi.

A set of Git security releases

Post Syndicated from corbet original https://lwn.net/Articles/755935/rss

Git versions v2.17.1, v2.13.7, v2.14.4, v2.15.2, and v2.16.4 have all been released with fixes for a couple of security issues. The nastier of the two (CVE-2018-11235) enables arbitrary code execution controlled by a hostile repository. See this Microsoft blog entry for more details — after updating.

Getting Rid of Your Mac? Here’s How to Securely Erase a Hard Drive or SSD

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/how-to-wipe-a-mac-hard-drive/

erasing a hard drive and a solid state drive

What do I do with a Mac that still has personal data on it? Do I take out the disk drive and smash it? Do I sweep it with a really strong magnet? Is there a difference in how I handle a hard drive (HDD) versus a solid-state drive (SSD)? Well, taking a sledgehammer or projectile weapon to your old machine is certainly one way to make the data irretrievable, and it can be enormously cathartic as long as you follow appropriate safety and disposal protocols. But there are far less destructive ways to make sure your data is gone for good. Let me introduce you to secure erasing.

Which Type of Drive Do You Have?

Before we start, you need to know whether you have an HDD or an SSD. To find out, or at least to make sure, click the Apple menu and select “About This Mac.” Once there, select the “Storage” tab to see which type of drive is in your system.

The first example, below, shows a SATA Disk (HDD) in the system.

SATA HDD

In the next case, we see we have a Solid State SATA Drive (SSD), plus a Mac SuperDrive.

Mac storage dialog showing SSD

The third screen shot shows an SSD, as well. In this case it’s called “Flash Storage.”

Flash Storage

Make Sure You Have a Backup

Before you get started, you’ll want to make sure that any important data on your hard drive has been copied somewhere else. OS X’s built-in Time Machine backup software is a good start, especially when paired with Backblaze. You can learn more about using Time Machine in our Mac Backup Guide.

With a local backup copy in hand and secure cloud storage, you know your data is always safe no matter what happens.

Once you’ve verified your data is backed up, roll up your sleeves and get to work. The key is OS X Recovery — a special part of the Mac operating system since OS X 10.7 “Lion.”

How to Wipe a Mac Hard Disk Drive (HDD)

NOTE: If you’re interested in wiping an SSD, see below.

    1. Make sure your Mac is turned off.
    2. Press the power button.
    3. Immediately hold down the command and R keys.
    4. Wait until the Apple logo appears.
    5. Select “Disk Utility” from the OS X Utilities list. Click Continue.
    6. Select the disk you’d like to erase by clicking on it in the sidebar.
    7. Click the Erase button.
    8. Click the Security Options button.
    9. The Security Options window includes a slider that enables you to determine how thoroughly you want to erase your hard drive.

There are four notches to that Security Options slider. “Fastest” is quick but insecure — data could potentially be rebuilt using a file recovery app. Moving that slider to the right introduces progressively more secure erasing. Disk Utility’s most secure level erases the information used to access the files on your disk, then writes zeroes across the disk surface seven times to help remove any trace of what was there. This setting conforms to the DoD 5220.22-M specification.

    10. Once you’ve selected the level of secure erasing you’re comfortable with, click the OK button.
    11. Click the Erase button to begin. Bear in mind that the more secure the method you select, the longer it will take. The most secure methods can add hours to the process.

Once it’s done, the Mac’s hard drive will be clean as a whistle and ready for its next adventure: a fresh installation of OS X, donation to a relative or a local charity, or a trip to an e-waste facility. Of course you can still drill a hole in your disk or smash it with a sledgehammer if it makes you happy, but now you know how to wipe the data from your old computer with much less ruckus.

The above instructions apply to older Macintoshes with HDDs. What do you do if you have an SSD?

Securely Erasing SSDs, and Why Not To

Most new Macs ship with solid state drives (SSDs). Only the iMac and Mac mini ship with regular hard drives anymore, and even those are available in pure SSD variants if you want.

If your Mac comes equipped with an SSD, Apple’s Disk Utility software won’t actually let you zero the hard drive.

Wait, what?

In a tech note posted to Apple’s own online knowledgebase, Apple explains that you don’t need to securely erase your Mac’s SSD:

With an SSD drive, Secure Erase and Erasing Free Space are not available in Disk Utility. These options are not needed for an SSD drive because a standard erase makes it difficult to recover data from an SSD.

In fact, some folks will tell you not to zero out the data on an SSD, since doing so causes wear on the memory cells that, over time, can affect reliability. I don’t think that’s nearly as big an issue as it used to be — SSD reliability and longevity have improved.

If “Standard Erase” doesn’t quite make you feel comfortable that your data can’t be recovered, there are a couple of options.

FileVault Keeps Your Data Safe

One way to make sure that your SSD’s data remains secure is to use FileVault. FileVault is whole-disk encryption for the Mac. With FileVault engaged, you need a password to access the information on your hard drive; without the password, that data stays encrypted and unreadable.

There’s one potential downside of FileVault — if you lose your password or the encryption key, you’re screwed: You’re not getting your data back any time soon. Based on my experience working at a Mac repair shop, losing a FileVault key happens more frequently than it should.

When you first set up a new Mac, you’re given the option of turning FileVault on. If you don’t do it then, you can turn on FileVault at any time by clicking on your Mac’s System Preferences, clicking on Security & Privacy, and clicking on the FileVault tab. Be warned, however, that the initial encryption process can take hours, as will decryption if you ever need to turn FileVault off.

With FileVault turned on, you can restart your Mac into its Recovery System (by restarting the Mac while holding down the command and R keys) and erase the hard drive using Disk Utility, once you’ve unlocked it (by selecting the disk, clicking the File menu, and clicking Unlock). That deletes the FileVault key, which means any data on the drive is useless.

FileVault doesn’t impact the performance of most modern Macs, though I’d suggest only using it if your Mac has an SSD, not a conventional hard disk drive.

Securely Erasing Free Space on Your SSD

If you don’t want to take Apple’s word for it, if you’re not using FileVault, or if you just want to, there is a way to securely erase free space on your SSD. It’s a little more involved but it works.

Before we get into the nitty-gritty, let me state for the record that this really isn’t necessary, which is why Apple has made it so hard to do. But if you’re set on it, you’ll need to use Apple’s Terminal app. Terminal provides you with command line access to the OS X operating system. Terminal lives in the Utilities folder, but you can access it from the Mac’s Recovery System as well. Once your Mac has booted into the Recovery partition, click the Utilities menu and select Terminal to launch it.

From a Terminal command line, type:

diskutil secureErase freespace VALUE /Volumes/DRIVE

That tells your Mac to securely erase the free space on your SSD. You’ll need to change VALUE to a number between 0 and 4. 0 is a single-pass run of zeroes; 1 is a single-pass run of random numbers; 2 is a 7-pass erase; 3 is a 35-pass erase; and 4 is a 3-pass erase. DRIVE should be changed to the name of your hard drive. To run a 7-pass erase of the SSD in “JohnB-Macbook”, you would enter the following:

diskutil secureErase freespace 2 /Volumes/JohnB-Macbook

And remember, if you used a space in the name of your Mac’s hard drive, you need to insert a leading backslash before the space. For example, to run a 35-pass erase on a hard drive called “Macintosh HD” you enter the following:

diskutil secureErase freespace 3 /Volumes/Macintosh\ HD
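
If you’d rather not fuss with escaping at all, you can build the command programmatically. Here’s a small Python sketch of my own (not part of Apple’s tooling): passing the arguments as a list sidesteps shell quoting entirely, so a volume name with spaces needs no backslashes:

import subprocess

def secure_erase_free_space(volume_name, level=2):
    """Run diskutil's secureErase on a volume's free space.

    Passing args as a list avoids shell quoting, so a name like
    "Macintosh HD" works without a backslash.
    """
    if not 0 <= level <= 4:
        raise ValueError("level must be 0-4")
    subprocess.run(
        ["diskutil", "secureErase", "freespace", str(level),
         "/Volumes/" + volume_name],
        check=True,
    )

secure_erase_free_space("Macintosh HD", level=3)  # 35-pass erase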

Something to remember is that the more extensive the erase procedure, the longer it will take.

When Erasing is Not Enough — How to Destroy a Drive

If you absolutely, positively need to be sure that all the data on a drive is irretrievable, see this Scientific American article (with contributions by Gleb Budman, Backblaze CEO), How to Destroy a Hard Drive — Permanently.

The post Getting Rid of Your Mac? Here’s How to Securely Erase a Hard Drive or SSD appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Recording lost seconds with the Augenblick blink camera

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/augenblick-camera/

Warning: a GIF used in today’s blog contains flashing images.

Students at the University of Bremen, Germany, have built a wearable camera that records the seconds of vision lost when you blink. Augenblick uses a Raspberry Pi Zero and Camera Module alongside muscle sensors to record footage whenever you close your eyes, producing a rather disjointed film of the sights you miss out on.

Augenblick blink camera recording using a Raspberry Pi Zero

Blink and you’ll miss it

The average person blinks up to five times a minute, with each blink lasting 0.5 to 0.8 seconds. These half-seconds add up to about 30 minutes a day. What sights are we losing during these minutes? That is the question asked by students Manasse Pinsuwan and René Henrich when they set out to design Augenblick.

Blinking is a highly invasive mechanism for our eyesight. Every day we close our eyes thousands of times without noticing it. Our mind manages to never let us wonder what exactly happens in the moments that we miss.

Capturing lost moments

For Augenblick, the wearer sticks MyoWare Muscle Sensor pads to their face, and these detect the electrical impulses that trigger blinking.

Augenblick blink camera recording using a Raspberry Pi Zero

Two pads are applied over the orbicularis oculi muscle that forms a ring around the eye socket, while the third pad is attached to the cheek as a neutral point.

Biology fact: there are two muscles responsible for blinking. The orbicularis oculi muscle closes the eye, while the levator palpebrae superioris muscle opens it — and yes, they both sound like the names of Harry Potter spells.

The sensor is read 25 times a second. Whenever it detects that the orbicularis oculi is active, the Camera Module records video footage.
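
The project’s control code isn’t reproduced in the post, but the loop is easy to picture. Here’s a hedged Python sketch, assuming the MyoWare’s analog output reaches the Pi through an MCP3008 ADC (the Pi has no analog inputs of its own) and that the activation threshold is tuned by hand:

import time
from gpiozero import MCP3008
from picamera import PiCamera

sensor = MCP3008(channel=0)   # MyoWare output via an MCP3008 ADC (assumed wiring)
camera = PiCamera()
THRESHOLD = 0.5               # hand-tuned activation level, in the ADC's 0.0-1.0 range
recording = False
clip = 0

while True:
    active = sensor.value > THRESHOLD   # is the orbicularis oculi firing?
    if active and not recording:
        camera.start_recording("blink_%04d.h264" % clip)
        recording = True
    elif not active and recording:
        camera.stop_recording()
        recording = False
        clip += 1
    time.sleep(1 / 25)  # poll the sensor 25 times a second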

Augenblick blink recording using a Raspberry Pi Zero

Pressing a button on the side of the Augenblick glasses sets the code running. An LED lights up whenever the camera is recording and also serves to confirm the correct placement of the sensor pads.

Augenblick blink camera recording using a Raspberry Pi Zero

The Pi Zero saves the footage so that it can be stitched together later to form a continuous, if disjointed, film.

Learn more about the Augenblick blink camera

You can find more information on the conception, design, and build process of Augenblick here in German, with a shorter explanation including lots of photos here in English.

And if you’re keen to recreate this project, our free project resource for a wearable Pi Zero time-lapse camera will come in handy as a starting point.

The post Recording lost seconds with the Augenblick blink camera appeared first on Raspberry Pi.

Security and Human Behavior (SHB 2018)

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/05/security_and_hu_7.html

I’m at Carnegie Mellon University, at the eleventh Workshop on Security and Human Behavior.

SHB is a small invitational gathering of people studying various aspects of the human side of security, organized each year by Alessandro Acquisti, Ross Anderson, and myself. The 50 or so people in the room include psychologists, economists, computer security researchers, sociologists, political scientists, neuroscientists, designers, lawyers, philosophers, anthropologists, business school professors, and a smattering of others. It’s not just an interdisciplinary event; most of the people here are individually interdisciplinary.

The goal is to maximize discussion and interaction. We do that by putting everyone on panels, and limiting talks to 7-10 minutes. The rest of the time is left to open discussion. Four hour-and-a-half panels per day over two days equals eight panels; six people per panel means that 48 people get to speak. We also have lunches, dinners, and receptions — all designed so people from different disciplines talk to each other.

I invariably find this to be the most intellectually stimulating conference of my year. It influences my thinking in many different, and sometimes surprising, ways.

This year’s program is here. This page lists the participants and includes links to some of their work. As he does every year, Ross Anderson is liveblogging the talks. (Ross also maintains a good webpage of psychology and security resources.)

Here are my posts on the first, second, third, fourth, fifth, sixth, seventh, eighth, ninth, and tenth SHB workshops. Follow those links to find summaries, papers, and occasionally audio recordings of the various workshops.

Next year, I’ll be hosting the event at Harvard.

Welcome Jack — Data Center Tech

Post Syndicated from Yev original https://www.backblaze.com/blog/welcome-jack-data-center-tech/

As we shoot way past 500 petabytes of data stored, we need a lot of helping hands in the data center to keep those hard drives spinning! We’ve been hiring quite a lot, and our latest addition is Jack. Let’s learn a bit more about him, shall we?

What is your Backblaze Title?
Data Center Tech

Where are you originally from?
Walnut Creek, CA until 7th grade when the family moved to Durango, Colorado.

What attracted you to Backblaze?
I had heard about how cool the Backblaze community is and have always been fascinated by technology.

What do you expect to learn while being at Backblaze?
I expect to learn a lot about how our data centers run and all of the hardware behind it.

Where else have you worked?
Garrhs HVAC as an HVAC Installer and then Durango Electrical as a Low Volt Technician.

Where did you go to school?
Durango High School and then Montana State University.

What’s your dream job?
I would love to be a driver for Audi Sport. Race cars are so much fun!

Favorite place you’ve traveled?
Iceland has definitely been my favorite so far.

Favorite hobby?
Video games.

Of what achievement are you most proud?
Getting my Eagle Scout badge was a tough, but rewarding experience that I will always cherish.

Star Trek or Star Wars?
Star Wars.

Coke or Pepsi?
Coke…I know, it’s bad.

Favorite food?
Thai food.

Why do you like certain things?
I tend to warm up to things the more time I spend around them, although I never really know until it happens.

Anything else you’d like to tell us?
I’m a friendly car guy who will always be in love with my European cars and I really enjoy the Backblaze community!

We’re happy you joined us Out West! Welcome aboard Jack!

The post Welcome Jack — Data Center Tech appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Enchanting images with Inky Lines, a Pi‑powered polargraph

Post Syndicated from Helen Lynn original https://www.raspberrypi.org/blog/enchanting-images-inky-lines-pi-powered-polargraph/

A hanging plotter, also known as a polar plotter or polargraph, is a machine for drawing images on a vertical surface. It does so by using motors to control the length of two cords that form a V shape, supporting a pen where they meet. We’ve featured one on this blog before: Norbert “HomoFaciens” Heinz’s video is a wonderfully clear introduction to how a polargraph works and what you have to consider when you’re putting one together.

Today, we look at Inky Lines, by John Proudlock. With it, John is creating a series of captivating and beautiful pieces, and with his most recent work, each rendering of an image is unique.

The Inky Lines plotter draws a flock of seagulls in blue ink on white paper. The print head is suspended near the bottom left corner of the image, as the pen inks the wing of a gull

An evolving project

The project isn’t new – John has been working on it for at least a couple of years – but it is constantly evolving. When we first spotted it, John had just implemented code to allow the plotter to produce mesmeric, spiralling patterns.

A blue spiral pattern featuring overlapping "bubbles"
A dense pink spiral pattern, featuring concentric circles and reminiscent of a mandala
A blue spirograph-type pattern formed of large overlapping squares, each offset from its neighbour by a few degrees, producing a four-spiral-armed "galaxy" shape where lines overlap. The plotter's print head is visible in a corner of the image

But we’re skipping ahead. Let’s go back to the beginning.

From pixels to motor movements

John starts by providing an image, usually no more than 100 pixels wide, to a Raspberry Pi. Custom software that he wrote evaluates the darkness of each pixel and selects a pattern of a suitable density to represent it.

The two cords supporting the plotter’s pen are wound around the shafts of two stepper motors, such that the movement of the motors controls the length of the cords: the program next calculates how much each motor must move in order to produce the pattern. The Raspberry Pi passes corresponding instructions to two motor circuits, which transform the signals to a higher voltage and pass them to the stepper motors. These turn by very precise amounts, winding or unwinding the cords and, very slowly, dragging the pen across the paper.
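
John’s actual software isn’t shown in the post, but the cord geometry behind “how much each motor must move” is standard for hanging plotters. Here’s a minimal Python sketch; the motor spacing and the steps-per-millimetre calibration constant are assumed values, not his:

import math

MOTOR_SPACING = 800.0   # mm between the two motor shafts (assumed)
STEPS_PER_MM = 5.0      # calibration constant for the spool diameter (assumed)

def cord_lengths(x, y):
    """Cord lengths for a pen at (x, y), with the left motor at (0, 0)
    and the right motor at (MOTOR_SPACING, 0).

    Each cord is the hypotenuse of a right triangle, so plain
    Pythagoras gives its length.
    """
    left = math.hypot(x, y)
    right = math.hypot(MOTOR_SPACING - x, y)
    return left, right

def steps_to_move(current_xy, target_xy):
    """Convert a pen move into step counts for each stepper motor."""
    l0, r0 = cord_lengths(*current_xy)
    l1, r1 = cord_lengths(*target_xy)
    return round((l1 - l0) * STEPS_PER_MM), round((r1 - r0) * STEPS_PER_MM)

# Move the pen 10 mm to the right from the middle of the drawing area.
print(steps_to_move((400, 500), (410, 500)))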

A Raspberry Pi in a case, with a wide flex connected to a GPIO header
The Inky Lines plotter's print head, featuring cardboard and tape, draws an apparently random squiggle
A large area of apparently random pattern drawn by the plotter

John explains,

Suspended in-between the two motors is a print head, made out of a new 3-d modelling material I’ve been prototyping called cardboard. An old coat hanger and some velcro were also used.

(He’s our kind of maker.)

Unique images

The earlier drawings that John made used a repeatable method to render image files as lines on paper. That is, if the machine drew the same image a number of times, each copy would be identical. More recently, though, he has been using a method that yields random movements of the pen:

The pen point is guided around the image, but moves to each new point entirely at random. Up close this looks like a chaotic squiggle, but from a distance of a couple of meters, the human eye (and brain) make order from the chaos and view an infinite number of shades and a smoother, less mechanical image.

An apparently chaotic squiggle

This method means that no matter how many times the polargraph repeats the same image, each copy will be unique.
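
As a toy illustration of the idea (my own sketch, not John’s algorithm), one could bias a random walk by pixel darkness, so the pen lingers where the source image is dark and passes quickly through light areas:

import random
from PIL import Image

img = Image.open("source.png").convert("L")  # greyscale source, e.g. 100 px wide
w, h = img.size

def next_point(x, y, tries=8, radius=3):
    """Pick a random nearby point, biased toward darker pixels."""
    best = None
    for _ in range(tries):
        nx = min(max(x + random.randint(-radius, radius), 0), w - 1)
        ny = min(max(y + random.randint(-radius, radius), 0), h - 1)
        darkness = 255 - img.getpixel((nx, ny))
        # Keep the darkest candidate seen so far.
        if best is None or darkness > best[0]:
            best = (darkness, nx, ny)
    return best[1], best[2]

# Generate a path; every run produces a different squiggle.
x, y = w // 2, h // 2
path = [(x, y)]
for _ in range(20000):
    x, y = next_point(x, y)
    path.append((x, y))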

A gallery of work

Inky Lines’ website and its Instagram feed offer a collection of wonderful pieces John has drawn with his polargraph, and he discusses the different techniques and types of image that he is exploring.

A 3 x 3 grid of varied and colourful images from inkylinespolargraph's Instagram feed

They range from holiday photographs, processed to extract particular features and rendered in silhouette, to portraits, made with a single continuous line that can be several hundred metres long, to generative spirograph images like those pictured above, created by an algorithm rather than rendered from a source image.

The post Enchanting images with Inky Lines, a Pi‑powered polargraph appeared first on Raspberry Pi.