Posts in the Commentary Category

Get Static

Published 4 months, 2 weeks past

If you are in charge of a web site that provides even slightly important information, or important services, it’s time to get static.  I’m thinking here of sites for places like health departments (and pretty much all government services), hospitals and clinics, utility services, food delivery and ordering, and I’m sure there are more that haven’t occurred to me.  As much as you possibly can, get it down to static HTML and CSS and maybe a tiny bit of enhancing JS, and pare away every byte you can.

Because too many sites are already crashing because their CMSes can’t keep up with the traffic surges.  And too many sites are using dynamic frameworks that drain mobile batteries and shut out people with older browsers.  That’s annoying and counter-productive in the best of times, but right now, it’s unacceptable.  This is not the time for “well, this is as performant as our stack gets, so I guess users will have to live with it”.  Performance isn’t just something to aspire to any more.  Right now, in some situations, performance could literally be life-saving to a user, or their family.

We’re in this to serve our users.  The best service you can render at this moment is to make sure they can use your site or service, not get 502/Bad Gateway or a two-minute 20%-battery-drain page render.  Everything should take several back seats to this effort.

I can’t tell you how best to get static—only you can figure that out.  Maybe for you, getting static means using very aggressive server caching and a cache-buster approach to updating info.  Maybe it means using some kind of static-render plugin for your CMS.  Maybe it means accelerating a planned migration to a static-site CMS like Jekyll or Eleventy or Grav.  Maybe it means saving pages as HTML from your browser and hand-assembling a static copy of your site for now.  There are a lot of ways to do this, but whatever way you choose, do it now.
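
If something like Eleventy is your route, the barrier to entry is lower than you might think.  Here’s a minimal configuration sketch (the directory names are examples, not a recommendation):

    // .eleventy.js: a minimal Eleventy setup.  Templates in src/ are
    // built out to plain static HTML in _site/, and the css/ directory
    // is copied through untouched.
    module.exports = function (eleventyConfig) {
      eleventyConfig.addPassthroughCopy("css");
      return {
        dir: { input: "src", output: "_site" },
      };
    };

From there, any static file server or free static host can serve the output, with nothing dynamic left to fall over under load.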


Addendum: following are a few resources that can help. Please suggest more in the comments!


Trying to Work From Home

Published 4 months, 3 weeks past

I’ve been working from home for (checks watch) almost 19 years now, and I’d love to share some tips with you all on how to make it work for you.

Except I can’t, because this has been incredibly disruptive for me.  See, my home is usually otherwise empty during the day—spouse at work, kids at school—which means I can crank up the beats and swear to my heart’s content at my code typos.  Now, not only do I have to wear headphones and monitor my language when I work, I’m also surrounded by office-mates who basically play video games and watch cat videos all day, except for those times when I really get focused on a task, when they magically sense it’s the perfect time to come ask me random questions that could have waited, derailing my focus and putting me back at square one.

Which, to most of you used to working in an office setting, I suppose might seem vaguely familiar.  I’m not used to it at all.

Here’s what I can tell you: if you’re having trouble focusing on work, or anything else, it’s not that you’re terrible at working from home or bad at your job.  It’s that you’re doing this in a set of circumstances completely unprecedented in our lifetimes.  It’s that you’re doing this while worried not only about keeping yourself and your loved ones safe from a global pandemic, but probably also worried about your continued employment—not because you’re doing badly, but because the economy is on the verge of freezing up completely.  No spending means no business income means no salaries means no money to spend.

We can hope for society-level measures to unjam the economic engine, debt leniency or zero-interest loans or Universal Basic Income or what have you, but until those measures exist and begin to work together, we’re all stumbling scared in a pitch-black forest.  Take it from someone who has been engulfed by overwhelming, frightening, pitiless circumstances before: Work can be a respite, but it’s hard to sustain that retreat.  It’s hard to motivate yourself to even think about work, let alone do a good job.

Be forgiving of yourself.  Give yourself time and space to process the fear, to work through it.  Find a place for yourself in relation to it, so that you can exist beside it without it always disrupting your thoughts.  That’s the only way I know to free up any mental resources to try to do good work.  It also puts you in a place where you can act with some semblance of reason, instead of purely from fear.

Stay safe, friends.  We have a long, unknown road ahead.  Adjustment will be a long time in coming.  Support each other as much as you can.  Community is, in the end, the most resilient and replenishing force we have.


Running Code Over Time

Published 7 months, 1 week past

I’m posting this on the last day of 2019.  As I write it, the second post I ever made on meyerweb says it was published “20 years, 6 days ago”.  It was published on the second-to-last day of 1999, which was 20 years and one day ago.

(Screenshot: up top, the date when I wrote this post; down below, a five-day error.)

What I realized, once the discrepancy was pointed out to me (hat tip: Eric Portis), is that the five-day error is there because in the two decades since I posted it, there have been five leap days.  When I wrote the code to construct those relative-time strings, I either didn’t think about leap days, or if I did, I decided a day or two here and there wouldn’t matter all that much.
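
The original code isn’t worth reproducing here, but the failure mode is easy to sketch.  Something like this (a reconstruction of the general approach, not my actual code) drifts in exactly this way:

    // A relative-time calculation that ignores leap days: it treats
    // every year as exactly 365 days long.
    const MS_PER_DAY = 24 * 60 * 60 * 1000;

    function naiveAge(published, now = new Date()) {
      const days = Math.floor((now - published) / MS_PER_DAY);
      return `${Math.floor(days / 365)} years, ${days % 365} days ago`;
    }

    // Five leap days fall between these two dates, so this returns
    // "20 years, 6 days ago" when the calendar says 20 years, 1 day.
    console.log(naiveAge(new Date("1999-12-30"), new Date("2019-12-31")));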

Which is to say, I failed to think about the consequences of my code running over long periods of time.  Maybe a day or two of error isn’t all that big a deal in human-friendly relative-time output.  If a post went up six years and two days ago but the code says six years and one day, well, nobody will really care that much even if they notice.  But five days is noticeable, and what’s more, it’s a little human-unfriendly.  It jars.

I think a lot of us tend not to think about running code over long time periods.  I don’t even mean long time periods in “Web years”, though I could—the Web is now just about three decades old, depending on when you reckon the start date.  (Births of the web are like standards—there are so many to choose from!)  I mean human-scale long periods of time, code that is old enough to have been running before your children were born, or maybe even you yourself.  Code that might still be running long after you die, or your children do.

The Web often seems ephemeral.  Trends shift, technologies advance, frameworks flare and fade.  There’s always so much new shiny stuff to try, new things to learn, that it seems like nothing will last.  We say “the Internet never forgets” (even though it does so all the time) but it’s lip service at this point.  And yet, the first Web pages are still online and accessible.

Which means the code that underpins them, from the HTML to the HTTP, is all still running and viable after 30 years.  The foresight that went into those technologies, and the bedrock commitment to consistency over long time frames, is frankly incredible.  It’s inspiring.  Compatibility both forwards and backwards in time, over decades and perhaps eventually centuries, is a remarkable achievement.

As I write this, I have yet to fix my relative-time code.  It’s on my list now, and I plan to get to it soon.  I presume the folks creating Intl.RelativeTimeFormat (hat tip: Amelia Bellamy-Royds) put a lot more thought into their code than I did mine, because they know they’re writing for long time frames.
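
Here’s a minimal illustration of what that API gives you; the date math that feeds it, leap days and all, is still up to the caller:

    // Intl.RelativeTimeFormat handles the phrasing; you supply the value.
    const rtf = new Intl.RelativeTimeFormat("en", { numeric: "auto" });
    console.log(rtf.format(-20, "year"));  // "20 years ago"
    console.log(rtf.format(-1, "day"));    // "yesterday"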

What I need to do is adopt that same mindset.  I think we all do.  We should think of our code, even our designs, as running for decades, and alter our work to match.  I don’t necessarily know what that means, but we’ll never find out unless we try.


The Broken Physics of “The Umbrella Academy” Finale

Published 1 year, 3 months past

Not long ago, Kat and I got around to watching The Umbrella Academy’s first season on Netflix.  I thought it was pretty good!  It was a decent mix of good decisions and bad decisions by people in the story, I liked most of the characters and their portrayals, and I thought the narrative arcs came to good places. Not perfect, but good.

Except.  I have to talk about the finale, people.  I have to get into why the ending, the very last few minutes of season one, just didn’t work for me.  And in order to do that, I’m going to deploy, for the first time ever, a WordPress Spoiler Cut™ on this here blog o’ mine, because this post is spoilerrific.  Ready?  Here we go.

Massive, massive spoilers and a fair amount of science ahead!

Securing Web Sites Made Them Less Accessible

Published 2 years, 5 days past

In the middle of last month (July 2018), I found myself staring at a projector screen, waiting once again to see if Wikipedia would load.  If I was lucky, the page started rendering 15-20 seconds after I sent the request.  If not, it could be closer to 60 seconds, assuming the browser didn’t just time out on the connection.  I saw a lot of “the server stopped responding” over the course of a few days.

It wasn’t just Wikipedia, either.  CNN International had similar load times.  So did Google’s main search page.  Even this here site, with minimal assets to load, took a minimum of 10 seconds to start rendering.  Usually longer.

In 2018?  Yes.  In rural Uganda, where I was improvising an introduction to web development for a class of vocational students, that’s the reality.  They can have a computer lab full of Dell desktops running Windows or rows of Raspberry Pis running Ubuntu or whatever setup there is, but when satellites in geosynchronous earth orbit are your only source of internet, you wait.  And wait.  And wait.

I want to explain why—and far more importantly, how we’ve made that experience interminably worse and more expensive in the name of our comfort and security.

First, please consider the enormously constrained nature of satellite internet access.  If you’re already familiar with this world, skip ahead a few paragraphs; but if not, permit me a brief description of the challenges.

For geosynchronous-satellite internet access, the speed of light becomes a factor in ping times: just having the signals propagate through a mixture of vacuum and atmosphere chews up approximately half a second of travel time over roughly 94,500 miles (~152,000km).  If all that distance were vacuum, your absolute floor for ping latency would be about 506 milliseconds.

That’s just the time for the signals to make two round trips to geosynchronous orbit: request up and down, response up and down.  In reality, add the time to route the packets on either end, plus the re-transmission time at the satellite itself.
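
The arithmetic behind that floor fits on the back of an envelope.  A quick check, using the figures above:

    // Four legs to geosynchronous orbit (request up and down, response
    // up and down) total roughly 152,000km of signal path.
    const C_KM_PER_S = 299792.458;  // speed of light in vacuum
    const PATH_KM = 152000;

    const floorMs = (PATH_KM / C_KM_PER_S) * 1000;
    console.log(Math.round(floorMs) + "ms");  // ~507ms, before any routing
                                              // or re-transmission delays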

But that’s not the real connection killer in most cases: packet loss is.  After all, these packets are going to orbit and back.  Lots of things along those long and lonely signal paths can cause the packets to get dropped.  50% packet loss is not uncommon; 80% is not unexpected.

So, you’re losing half your packets (or more), and the packets that aren’t lost have latency times around two-thirds of a second (or more).  Each.

That’s reason enough to set up a local caching server.  Another, even more pressing reason is that pretty much all commercial satellite connections come with data caps.  Where I was, the cap was 50GB/month.  Beyond that, you either pay overage charges, or just don’t have data until the next month.  So if you can locally cache URLs so that they only count against your data usage the first time they’re loaded, you do that.  And someone had, for the school where I was teaching.

But there I stood anyway, hoping my requests to load simple web pages would bear fruit, and I could continue teaching basic web principles to a group of vocational students.  Because Wikipedia wouldn’t cache.  Google wouldn’t cache.  Meyerweb wouldn’t cache.  Almost nothing would cache.

Why?

HTTPS.

A local caching server, meant to speed up commonly-requested sites and reduce bandwidth usage, is a “man in the middle”.  HTTPS, which by design prevents man-in-the-middle attacks, utterly breaks local caching servers.  So I kept waiting and waiting for remote resources, eating into that month’s data cap with every request.

The drive to force every site on the web to HTTPS has pushed the web further away from the next billion users—not to mention a whole lot of the previous half-billion.  I saw a piece that claimed, “Investing in HTTPS makes it faster, cheaper, and easier for everyone.”  If you define “everyone” as people with gigabit fiber access, sure.  Maybe it’s even true for most of those whose last mile is copper.  But for people beyond the reach of glass and wire, every word of that claim was wrong.

If this is a surprise to you, you’re by no means alone.  I hadn’t heard anything about it, so I asked a number of colleagues if they knew about the problem.  Not only had they not, they all reacted the same way I did: this must not be an actual problem, or we’d have heard about it!  But no.

Can we do anything?  For users of up-to-date browsers, yes: service workers create a “good” man in the middle that sidesteps the HTTPS problem, so far as I understand.  So if you’re serving content over HTTPS, creating a service worker should be one of your top priorities right now, even if it’s just to do straightforward local caching and nothing fancier.  I haven’t gotten one up for meyerweb yet, but I will do so very soon.
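
For what it’s worth, here’s a bare-bones sketch of that kind of straightforward local caching; the cache name is arbitrary, and a production version would want real invalidation logic:

    // sw.js: a minimal cache-first service worker.  Serve from the
    // local cache when possible; otherwise fetch, and stash a copy
    // of the response for next time.
    const CACHE = "local-cache-v1";

    self.addEventListener("fetch", (event) => {
      if (event.request.method !== "GET") return;  // only cache GETs
      event.respondWith(
        caches.match(event.request).then((cached) => {
          if (cached) return cached;
          return fetch(event.request).then((response) => {
            const copy = response.clone();
            caches.open(CACHE).then((cache) => cache.put(event.request, copy));
            return response;
          });
        })
      );
    });

Registering it from the page is a single navigator.serviceWorker.register() call, and repeat visits can then be served locally.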

That’s great for modern browsers, but not everyone has the option to be modern.  Sometimes they’re constrained by old operating systems to run older browsers, ones with no service-worker support: a lab full of Windows XP machines limited to IE8, for example.  Or on even older machines, running Windows 95 or other operating systems of that era.  Those are most likely to be the very people who are in situations where they’re limited to satellite internet or other similarly slow services with unforgiving data caps.  Even in the highly-wired world, you can still find older installs of operating systems and browsers: public libraries, to pick but one example.  Securing the web literally made it less accessible to many, many people around the world.

Beyond deploying service workers and hoping those struggling to bridge the digital divide make it across, I don’t really have a solution here.  I think HTTPS is probably a net positive overall, and I don’t know what we could have done better.  All I know is that I saw, first-hand, the negative externality that was pushed onto people far, far away from our data centers and our thoughts.

My thanks to Tim Kadlec and Ethan Marcotte for their feedback and insight while I was drafting this post, and to Lara Hogan and Aaron Gustafson for their early assistance with my research.


Increasing Accessibility

Published 2 years, 7 months past

Thanks to the fantastic comments on my previous post, I’ve made some accessibility improvements.  Chief among them: adding WAI-ARIA role values to various parts of the structure.  These include:

  • role="banner" for the site’s masthead
  • role="navigation" added to the navigation links, including subnavigation links like previous/next posts
  • role="main" for the main portion of a page
  • role="complementary" for sidebars in the blog archives
  • role="article" for any blog post, whether there are several on a page or just one

In addition, I restored skip links to the masthead of most pages (the rest will get them soon).  The links are revealed on keyboard focus, which I’m not sure I like.  I feel like these aren’t quite where they need to be.  A big limitation is the lack of :matches() (or similar) support in browsers, since I’d love to have any keyboard focus in the masthead or navigation links bring up the skip links, which requires some sort of parent selection.  I may end up using a tiny bit of enhancing Javascript to make the links’ UX more robust in JS situations, but still obviously available if JS fails.  And I may replicate them in the footer, as a way to quickly jump back up the page, especially to the navigation.
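
To sketch what that enhancing script might look like (the ID and class names here are made up for illustration): focusin events bubble, so a single listener on the masthead can stand in for the parent selector CSS lacks.

    // Reveal the skip links whenever anything inside the masthead has
    // focus.  focusin/focusout bubble (unlike focus/blur), so one
    // listener covers every descendant link.
    const masthead = document.getElementById("masthead");  // hypothetical ID
    if (masthead) {
      masthead.addEventListener("focusin", () => {
        masthead.classList.add("show-skip-links");  // hypothetical class
      });
      masthead.addEventListener("focusout", () => {
        masthead.classList.remove("show-skip-links");
      });
    }

If the script never runs, the focus-revealed links keep working as before, so nothing is lost when JS fails.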

Speaking of the navigation links, they’ve been moved in the source order to match their place in the visual layout.  My instincts with regard to source order and layout placement were confirmed to be woefully out of date: the best advice now is to put the markup where the layout calls for the content to be.  If you’re putting navigation links just under the masthead, then put their markup right after the masthead’s markup.  So I did that.

The one thing I didn’t change is heading levels, which suffer all the usual problems.  Right now, the masthead’s “meyerweb.com” is always an <h1> and the page title (or blog post titles) are all <h2>.  If I demoted the masthead content to, say, a plain old <div>, and promoted the post headings, then on pages like the home page, there’d be a whole bunch of <h1>s.  I’ve been told that’s a no-no.  If I’m wrong about that, let me know!

There’s still more to do, but I was able to put these into place with no more than a few minutes’ work, and going by what commenters told me, these will help quite a bit.  My thanks to everyone who contributed their insights and expertise!


The Newwwyear Design

Published 2 years, 7 months past

Well, here it is—the first new design for meyerweb since February 2005 (yes, almost 13 years).  It isn’t 100% complete, since I still want to tweak the navigation and pieces of the footer, but it’s well past the minimum-viable threshold.

My core goal was to make the site, particularly blog posts, more readable and inviting.  I think I achieved that, and I hope you agree.  The design should be more responsive-friendly than before, and I think all my flex and grid uses are progressively enhanced.  I do still need to better optimize my use of images, something I hope to start working on this week.

Things I particularly like about the design, in no particular order:

  • The viewport-height-scaled masthead, using a minimum height of 20vh.  Makes it beautifully responsive, leaving roughly 80% of the viewport’s height for content, without requiring media queries.
  • The “CSS” and “HTML” side labels I added to properly classed pre elements.  (For an example, see this recent post.)
  • The fading horizontal separators I created with sized linear gradients, to stand in for horizontal rules.  See, for example, between post content and metadata, or underneath the navlinks up top of the page.  I first did this over at An Event Apart last year, and liked them a lot.  I may start decorating them further, which multiple backgrounds make easy, but for now I’m sticking with the simple separators.
  • Using string-based grid-template-areas values to rearrange the footer at mobile sizes, and also to make the rare sidebar-bearing pages (such as those relating to S5) more robust.

There are (many) other touches throughout, but those are some high points.

As promised, I did livestream most of the process, and archived copies of those streams are available as a YouTube playlist for those who might be interested.  I absolutely acknowledge that for most people, nine hours of screencasting overlaid with rambling monologue would be very much like watching paint dry in a hothouse, but as Abraham Lincoln once said: for those who like this sort of thing, this is the sort of thing they like.

I was surprised to discover how darned easy it is to livestream.  I know we live in an age of digital wonders, but I had somehow gotten it into my head that streaming required dedicated hardware and gigabit upstream connections.  Nope: my five megabit upstream was sufficient to stream my desktop in HD (or close to it) and all I needed to broadcast was encoding software (I used OBS) and a private key from YouTube, which was trivial to obtain.  The only hardware I needed was the laptop itself.  Having a Røde Podcaster for a microphone was certainly helpful, but I could’ve managed without it.

(I did have a bit of weirdness where OBS stopped recognizing my laptop’s camera after my initial tests, but before I went live, so I wasn’t able to put up a window showing me while I typed.  Not exactly a big loss there.  Otherwise, everything seemed to go just fine.)

My thanks to everyone who hung out in the chat room as I livestreamed.  I loved all the questions and suggestions—some of which made their way into the final design.  And extra thanks to Jen Simmons, who lit the fire that got me moving on this.  I enjoyed the whole process, and it felt like a great way to close the books on 2017.


Book Review: “Create with Code: Build Your Own Website”

Published 2 years, 8 months past

A thing people ask me with some regularity is, “What’s a good book for someone who wants to get started in web design?”  I’m here to review a book that’s one of the best I’ve seen, Create with Code: Build Your Own Website, written by Clyde Hatter of CoderDojo’s Dojo Bray in Ireland.  I got my copy at my son’s elementary school Scholastic Book Fair earlier this year; it’s available from online booksellers and probably through local bookstores as well.

I’ll go into some details of what’s in it and what I think, and there will be some complaints.  So I want to stress up front: this is an excellent book for people who want to learn web design, with the modifier if you’re available to help them out when they hit stumbling blocks.  You aren’t going to have to hold their hands through the whole thing by any stretch, but there are moments where, for example, the filenames used in the text will mislead.  (More on that anon.)  For all that, it’s still an excellent book, and I recommend it.

The book is 94 pages, of which 88 pages are instructional, and none of it is filler—Mr. Hatter packs a surprising amount of good web design practice into those 88 pages.  The pages themselves are filled with colorful design, and the text is easily readable.  It’s aimed squarely at elementary-school readers, and it shows.  That’s a good thing, I hasten to add.  The tone is simple, uncomplicated, and stripped to the essentials.  At no point does it condescend.  It works well for any age, not just the suggested range of 7-17.  I enjoyed reading it, even though I knew literally everything the book covers.

The organizing idea of the book is creating a small web site for a ninja band (!!!) called The Nanonauts.  In the course of the book, the reader sets up a home page, an About Us page, a page listing upcoming concerts, and a couple more.  Everything makes sense and interrelates, even if a couple of things feel ever so slightly forced.

Here’s a page-number timeline of concepts’ first introductions:

p. 6
Brainstorming site content and sketching a site map.  Bear in mind here that the actual instructional text starts on page 6.
p. 10
Adding a style sheet to an HTML document via a link element.
p. 14
A nice breakdown of how images are loaded into the page, what the various (common) image attributes are and mean, and the importance of good alt text.  On page 14.
p. 17
The concept of an empty element and how it differs from other elements.
pp. 20-24
An extended discussion of proper structure and good content for the web.  It shows how using headings and paragraphs breaks up large text walls, makes the distinction between ordered and unordered lists, and demonstrates the importance of proper element nesting.
p. 25
Diving into CSS.  A style sheet was added to the document back on page 10, but this is where CSS starts to be discussed in detail.
p. 28
Radial gradients!  They went there!  The syntax isn’t dissected and explained, but just showing working gradients clues readers in to their existence.  There’s also an example of styling html separately from body, without making a super big deal out of it.  This is a pattern throughout the rest of the book: many things are used without massively explaining them.  The author relies on the reader to repeat the example and see what happens.
pp. 30-32
A really great explanation of hexadecimal color values.  I’ve never seen better, to be honest.  That’s followed by a similarly great breakdown of the uses for, and differences between, px, em, and % values for sizing.
p. 36
The first of several really lovely step-by-step explanations of style blocks.  In this case, it’s styling a nav element with an unordered list of links, explaining the effects of each rule as it’s added.
pp. 50-52
An example of properly structuring and styling tabular data (in this case, a list of upcoming concerts).
p. 59
The box model and inline elements explained in sparing but useful detail.  This includes a brief look at inline elements and the baseline, and vertical alignment thereof.
p. 74
Responsive web design!  A nice introduction to media queries, and a quick primer on what responsive means and why it’s important.
p. 78
Floating images to wrap text around them. That segues into layout, using floats for the boxes enclosing the bits of content.
p. 88
Using web fonts (basically Google fonts).
p. 90
Putting your site online.

That isn’t everything that’s touched on in the book by a long shot—max-width and min-width show up early, as do :last-child, border-radius, and several more CSS features.  As I said above, these are generally introduced without much detailed explanation.  It’s a bold approach, but one that I think pays off handsomely.  Trusting the reader to become interested enough to track down the details on their own leaves room to include more things to spark interest.

(Image: pages 10 and 11.  Not all pages are this text-heavy.)

That said, there are some aspects that may—probably will—cause confusion.  The biggest of these has to do with images.  There are several instances of using img to add images to pages, as you’d expect.  The author does provide a downloadable archive of assets, which is a little difficult to actually find (here’s a direct link), but the real problem is that once you extract the files, the filenames don’t match the filenames in print.  For example, in the book there’s a reference to nanonauts.jpg.  The corresponding file in the archive is NINJA_FACE_FORWARD.png.  At another point, DSC03730.png turns out to actually be NINJA_GUITAR.png.  There’s no indication of this whatsoever in the book.

I get it: mistakes happen, and sometimes digital assets get out of step with print.  Nevertheless, I fear this could prove a major stumbling block for some readers.  They see one filename in the book, and that filename doesn’t exist in the assets.  Maybe they open up the asset images until they find the right one, and then maybe they figure out to replace the filename in the book with the one they found, and move on from there… but maybe they don’t.  I’d be a lot happier if there were an errata note and mapping table on the download page, or the online archive’s assets were corrected.

Something similar happens on page 19, where the reader is directed to create a navigation link to songs.html when the page they’ve already created is called our-songs.html.  This one is a lot more forgivable, since the filenames are at least close to each other.  But again, it’s a place the reader might get lost and frustrated.  The painful irony is that this error appears in a “NINJA TIP” box that starts out, “Be careful when you’re typing links. You have to get them exactly right!”

Another error of this kind happens in the section on adding a video to a page (p.45).  All the markup is there, and the URL they supply in great big text loads a video just fine.  The problem is that the video it loads is an ad for Scholastic, not the ninja-playing-a-guitar video the text very heavily implies it will be.  I don’t know if it used to be a rock ninja shredding power chords and Scholastic replaced it or what, but it almost feels like a bait and switch.  It was a little disheartening.

There’s one aspect I can’t quite make up my mind about, which is that just about everything in the book—text, design elements, media query breakpoints—is done using pixels.  A couple of percentage widths show up near the very end, but not much is said about them.  There is a very nice comparison of pixels, ems, and percentages on page 32, but with ems never being used (despite a claim to the contrary), readers are unlikely to use or understand them.

Now, I don’t style this way, and my every instinct rebels against it.  But given that pixels really don’t mean what they used to, and that all modern browsers will scale pages up and down pretty seamlessly, is this a major problem?  I’m not sure that it is.  Either way, this does set readers on a specific initial path, and if that path bothers you, it’s worth knowing about so you can give them extra advice.

The third thing I found weird was the two pages devoted to embedding a live Google Map into one of the pages (showing the location of the Nanonauts’ next show).  On the one hand, it’s cool in that it shows how some HTML elements (i.e., iframe) can serve as containers for external assets more complicated than images and videos, and having a live map show up in the page you’re building is probably pretty mind-blowing for someone just starting out.  On the other, it’s kind of a fiddly and unusual use case: not many novice sites need an embedded widget calling an API.

I had less of a problem with the author showing a simple image-swapping-on-hover JavaScript solution, later in the book (even though my hindbrain kept chanting, “do that with CSS!”).  It’s a simple example of scripting pieces of the page, and lets Mr. Hatter talk about the DOM and DOM scripting without getting super crazy about it.

The last thing I found a bit lacking was the closing two pages, which cover putting the site online.  The author does their best with the two pages, and what’s there is correct, but it’s just not enough to help everyone get the results of their work online.  I’m not sure two pages ever could be enough to help the novice audience.  I’d have liked to see this get four pages, so there was room for more detail and more options.  If you’re giving this book to someone to help them learn, my advice is to tell them up front that you’ll help them get their work online once they’ve finished the book.

Okay, all that said?  I still absolutely recommend this as a beginners’ book.  Nearly every topic the text introduces, and there are many I didn’t mention here, is covered just the right amount and in just the right way to get readers where they need to be, excited about what they’ve accomplished, and ready to learn more on their own.  It’s pretty well up to date, at least as I write this, and isn’t likely to fall badly out of date any time soon.  Approachable, accessible, and instructive.

Final grade: I give it a solid B+, only falling short of A level due to the filename mismatches.

Note: All images in this review are copyright The CoderDojo Foundation.  Some were taken from the book’s asset files.

