Posts in the Tech Category

Variable Font Support

Published 5 years, 8 months past

Firefox 62 ships today, bringing with it some real CSS goodness.  For one: float shapes!  That means mainline Firefox users will now see the text flow past the blender in “Handiwork” the same way Chrome users have for a long time.

But an even bigger addition is support for variable fonts.  The ability to have one font file that mathematically describes variants on the base face means that all kinds of fine-grained typography are possible with far less bandwidth overhead and a major reduction in page weight.
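If you haven’t tried them yet, here’s a minimal sketch of what usage looks like; the font name and file URL are placeholders, not a real typeface:

@font-face {
   font-family: "Example Variable";
   src: url("example-var.woff2") format("woff2");
   font-weight: 100 900;   /* the weight range the file supports */
}
h1 {
   font-family: "Example Variable", sans-serif;
   font-weight: 650;   /* any value in the range, not just 400 or 700 */
}

One file, and every weight from 100 through 900 is available to your styles.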

However: bear in mind that like Safari, but unlike Chrome, Firefox’s variable-font support is dependent on the operating system on which it runs.  If you have Windows 10 or macOS 10.13, then you have variable-font support in Firefox and Safari.  Earlier versions of those operating systems don’t support variable fonts, and so Safari and Firefox don’t either.  Chrome rolls its own variable-font support, so it can extend support backwards in the OS timeline.

(I don’t know how things stand in the Linux world.  Hopefully someone can clear things up in the comments!)

I say this not to chastise Firefox (nor Safari), because I tend to think leaning on the OS for this sort of thing is reasonable.  I feel the same way about form elements like <select> dropdowns, to be clear, which I know likely places me in the minority.  The point here is to give you a heads-up: if you get reports that a font isn’t doing the variable thing you styled, but it’s working fine for you, keep “check their operating system version” on your list of diagnostic tests.


Firefox’s :screenshot command

Published 5 years, 8 months past

Back in 2015, I wrote about Firefox’s screenshot utility, which used to be a command in the GCLI.  Well, the GCLI is gone now, but the coders at Mozilla have brought command-line screenshotting back with :screenshot, currently available in Firefox Nightly and Firefox Dev Edition.  It’s available in the Web Console (⌥⌘K or Tools → Web Developer → Console).

An image I captured by typing :screenshot --dpr 0.5 in the Web Console

Once you’re in the Web Console, you can type :sc and then hit Tab to autocomplete :screenshot. From there, everything is the same as I wrote in 2015, with the exception that the --imgur and --chrome options no longer exist.  There are plans to add uploading to Firefox Screenshots as a replacement for the old Imgur option, but as of this writing, that’s still in the future.

So the list of :screenshot options as of late August 2018 is:

--clipboard
Copies the image to your OS clipboard for pasting into other programs.  Prevents saving to a file unless you use the --file option to force file-writing.
--delay
The time in seconds to wait before taking the screenshot; handy if you want to pop open a menu or invoke a hover state for the screenshot.  You can use any number, not just integers.
--dpr
The Device Pixel Ratio (DPR) of the captured image.  Values above 1 yield “zoomed-in” images; values below 1 create “zoomed-out” results.  See the original article for more details.
--fullpage
Captures the entire page, not just the portion of the page visible in the browser’s viewport.  For unusually long (or wide) pages, this can cause problems like crashing, not capturing all of the page, or just failing to capture anything at all.
--selector
Accepts a CSS selector and captures only that element and its descendants.
--file
When true, forces writing of the captured image to a file, even if --clipboard is also being used.  Setting this to false doesn’t seem to have any effect.
--filename
Allows you to set a filename rather than accept the default.  Explicitly saying --filename seems to be optional; I find that writing simply :screenshot test yields a file called test.png, without the need to write :screenshot --filename test. YFFMV.
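The options can be combined.  For instance, a capture like the following (the filename is just an example) should wait three seconds, then grab the whole page at double resolution:

:screenshot --fullpage --dpr 2 --delay 3 fullpage-test

The result lands in fullpage-test.png.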

I do have one warning: if you capture an image to a filename like test.png, and then you capture to that same filename, the new image will overwrite the old image.  This can bite you if you’re using the up-arrow history scroll to capture images in quick succession, and forget to change the filename for each new capture.  If you don’t supply a filename, then the file’s name uses the pattern of your OS screen capture naming; e.g., Screen Shot 2018-08-23 at 16.44.41.png on my machine.

I still use :screenshot to this day, and I’m very happy to see it restored to the browser — thank you, Mozillians!  You’re the best.


Securing Web Sites Made Them Less Accessible

Published 5 years, 9 months past

In the middle of last month (July 2018), I found myself staring at a projector screen, waiting once again to see if Wikipedia would load.  If I was lucky, the page started rendering 15-20 seconds after I sent the request.  If not, it could be closer to 60 seconds, assuming the browser didn’t just time out on the connection.  I saw a lot of “the server stopped responding” over the course of a few days.

It wasn’t just Wikipedia, either.  CNN International had similar load times.  So did Google’s main search page.  Even this here site, with minimal assets to load, took a minimum of 10 seconds to start rendering.  Usually longer.

In 2018?  Yes.  In rural Uganda, where I was improvising an introduction to web development for a class of vocational students, that’s the reality.  They can have a computer lab full of Dell desktops running Windows or rows of Raspberry Pis running Ubuntu or whatever setup there is, but when satellites in geosynchronous earth orbit are your only source of internet, you wait.  And wait.  And wait.

I want to explain why — and far more importantly, how we’ve made that experience interminably worse and more expensive in the name of our comfort and security.

First, please consider the enormously constrained nature of satellite internet access.  If you’re already familiar with this world, skip ahead a few paragraphs; but if not, permit me a brief description of the challenges.

For geosynchronous-satellite internet access, the speed of light becomes a factor in ping times: just having the signals propagate through a mixture of vacuum and atmosphere chews up approximately half a second of travel time over roughly 89,000 miles (~143,000km).  If all that distance were vacuum, your absolute floor for ping latency is about 477 milliseconds.

That’s just the time for the signals to make two round trips to geosynchronous orbit and back.  In reality, there are the times to route the packets on either end, and the re-transmission time at the satellite itself.
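If you want to check the arithmetic, it’s short.  Taking geosynchronous altitude as 35,786km and the speed of light as 299,792km/s:

4 × 35,786km ≈ 143,000km
143,000km ÷ 299,792km/s ≈ 0.477s

And that’s the theoretical best case, with the ground stations sitting directly beneath the satellite; real signal paths are longer.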

But that’s not the real connection killer in most cases: packet loss is.  After all, these packets are going to orbit and back.  Lots of things along those long and lonely signal paths can cause the packets to get dropped.  50% packet loss is not uncommon; 80% is not unexpected.

So, you’re losing half your packets (or more), and the packets that aren’t lost have latency times around two-thirds of a second (or more).  Each.

That’s reason enough to set up a local caching server.  Another, even more pressing reason is that pretty much all commercial satellite connections come with data caps.  Where I was, their cap was 50GB/month.  Beyond that, they could either pay overages, or just not have data until the next month.  So if you can locally cache URLs so that they only count against your data usage the first time they’re loaded, you do that.  And someone had, for the school where I was teaching.
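(For the curious: a caching proxy like Squid needs only a handful of lines to get going.  This is a rough sketch with made-up paths and sizes, not a tuned configuration:

# squid.conf — minimal bandwidth-saving cache
http_port 3128
cache_mem 256 MB
cache_dir ufs /var/spool/squid 20000 16 256
maximum_object_size 100 MB

The cache_dir line allots 20GB of disk for cached responses; everything past that is tuning.)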

But there I stood anyway, hoping my requests to load simple web pages would bear fruit, and I could continue teaching basic web principles to a group of vocational students.  Because Wikipedia wouldn’t cache.  Google wouldn’t cache.  Meyerweb wouldn’t cache.  Almost nothing would cache.

Why?

HTTPS.

A local caching server, meant to speed up commonly-requested sites and reduce bandwidth usage, is a “man in the middle”.  HTTPS, which by design prevents man-in-the-middle attacks, utterly breaks local caching servers.  So I kept waiting and waiting for remote resources, eating into that month’s data cap with every request.

The drive to force every site on the web to HTTPS has pushed the web further away from the next billion users — not to mention a whole lot of the previous half-billion.  I saw a piece that claimed, “Investing in HTTPS makes it faster, cheaper, and easier for everyone.”  If you define “everyone” as people with gigabit fiber access, sure.  Maybe it’s even true for most of those whose last mile is copper.  But for people beyond the reach of glass and wire, every word of that claim was wrong.

If this is a surprise to you, you’re by no means alone.  I hadn’t heard anything about it, so I asked a number of colleagues if they knew about the problem.  Not only had they not, they all reacted the same way I did: this must not be an actual problem, or we’d have heard about it!  But no.

Can we do anything?  For users of up-to-date browsers, yes: service workers create a “good” man in the middle that sidesteps the HTTPS problem, so far as I understand.  So if you’re serving content over HTTPS, creating a service worker should be one of your top priorities right now, even if it’s just to do straightforward local caching and nothing fancier.  I haven’t gotten one up for meyerweb yet, but I will do so very soon.
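To give a flavor of it, here’s a minimal cache-first worker.  This is a sketch of the idea rather than production code; the cache name is arbitrary, and a real deployment needs a cache-versioning and cleanup strategy:

// sw.js
const CACHE = 'local-cache-v1';

self.addEventListener('fetch', (event) => {
   if (event.request.method !== 'GET') return;   // only GETs can be cached
   event.respondWith(
      caches.match(event.request).then((cached) => {
         if (cached) return cached;   // already fetched once: answer locally
         return fetch(event.request).then((response) => {
            const copy = response.clone();   // a response body can only be read once
            caches.open(CACHE).then((cache) => cache.put(event.request, copy));
            return response;
         });
      })
   );
});

Register it from your pages with navigator.serviceWorker.register('/sw.js'), and repeat requests get answered locally instead of going to orbit and back.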

That’s great for modern browsers, but not everyone has the option to be modern.  Sometimes they’re constrained by old operating systems to run older browsers, ones with no service-worker support: a lab full of Windows XP machines limited to IE8, for example.  Or on even older machines, running Windows 95 or other operating systems of that era.  Those are most likely to be the very people who are in situations where they’re limited to satellite internet or other similarly slow services with unforgiving data caps.  Even in the highly-wired world, you can still find older installs of operating systems and browsers: public libraries, to pick but one example.  Securing the web literally made it less accessible to many, many people around the world.

Beyond deploying service workers and hoping those struggling to bridge the digital divide make it across, I don’t really have a solution here.  I think HTTPS is probably a net positive overall, and I don’t know what we could have done better.  All I know is that I saw, first-hand, the negative externality that was pushed onto people far, far away from our data centers and our thoughts.

My thanks to Tim Kadlec and Ethan Marcotte for their feedback and insight while I was drafting this post, and to Lara Hogan and Aaron Gustafson for their early assistance with my research.


What is the CSS ‘ch’ Unit?

Published 5 years, 10 months past

I keep seeing authors and speakers refer to the ch unit as meaning “character width”.  This leads to claims that you can “make your content column 60 characters wide for maximum readability” or “size images to be a certain number of characters!”

Well… yes and no.  Specifically, yes if you’re using fixed-width fonts.  Otherwise, mostly no.

This is because, despite what the letters ch might imply, ch units are not “character” units.  They are defined as:

Equal to the used advance measure of the “0” (ZERO, U+0030) glyph found in the font used to render it. (The advance measure of a glyph is its advance width or height, whichever is in the inline axis of the element.)

So however wide the “0” character is in a given typeface, that’s the measure of one ch.  In monospace (fixed-width) fonts, where all characters are the same width, 1ch equals one character.  In proportional (variable-width) fonts, any given character could be wider or narrower than the “0” character.

To illustrate this, here are a few example elements which are set to be exactly 20ch wide, and also contain exactly 20 characters.

Courier

Look, 20 characters. abcdefghijklmnopqrst 12345678901234567890 iiiiiiiiiiiiiiiiiiii mmmmmmmmmmmmmmmmmmmm

Helvetica

Look, 20 characters. abcdefghijklmnopqrst 12345678901234567890 iiiiiiiiiiiiiiiiiiii mmmmmmmmmmmmmmmmmmmm

Georgia

Look, 20 characters. abcdefghijklmnopqrst 12345678901234567890 iiiiiiiiiiiiiiiiiiii mmmmmmmmmmmmmmmmmmmm

It’s probably no surprise that in Courier, all the elements are the exact same width as their text contents.  In Helvetica, by contrast, this is mostly not the case, except for the numbers, which appear to be fixed-width.  In Georgia, none of the text contents fit the boxes, not even the numbers.

What I’ve found through random experimentation is that in proportional typefaces, 1ch is usually wider than the average character width, typically by around 20-30%.  But there are at least a few typefaces where the zero symbol is skinny with respect to the other letterforms; in such a case, 1ch is narrower than the average character width.  Trajan Pro is one example I found where the zero was a bit narrower than the average, but I’m sure there are others.  Conversely, I’m sure there are typefaces with Big Fat Zeroes, in which case the difference between ch and the average character width could be around 50%.

So in general, if you want an 80-character column width and you’re going to use ch to size it, aim for about 60ch, unless you’re specifically working with a typeface that has a skinny zero.  And if you’re working with multiple typefaces, say one for headlines and another for body copy, be careful about setting ch measures and thinking they’ll be equivalent between the two fonts.  The odds are very, very high they won’t be.
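In stylesheet terms, that advice comes down to a single declaration; the element name here is a stand-in for whatever wraps your body copy, and the 60ch value is the rule of thumb from above, not a law:

main {
   max-width: 60ch;   /* roughly 80 characters per line in many proportional faces */
}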

It would be interesting to see the Working Group take up the idea of average character width as a unit of measure — say, 1acw or possibly just 1cw — which actually uses all the letterforms in a typeface to calculate an average value.  That would get a lot closer to “make your columns 60 characters wide!” in a lot more cases than ch does now.


Specificity in :not(), :has(), and :matches()

Published 5 years, 11 months past

A few years back, I wrote a short post on specificity, element proximity, and the negation pseudo-class.  Everything in it is still accurate and relevant, but I have some updates to share.

First off, I’d like to clarify something that some people may have found confusing.  In that post, I said:

But it turns out that the negation pseudo-class isn’t counted as a pseudo-class.

That might leave some people with the idea that the entire negation portion of the selector is ignored for the purposes of specificity, especially if you don’t speak spec.

So consider the following:

div:not(.one) p

In order from left to right, that’s an element selector (div), a negation pseudo-class (:not), a class selector (.one), and another element selector (p).  Two element selectors and one class selector are counted towards the specificity, yielding a total of 0,0,1,2.  That’s the same specificity as div.one p, though the two selectors select very different things.

In Ye Olden Days, that was easy enough to work out, because :not() could only ever contain a simple selector.  Things are looking to get more complicated, however — :not() is set to accept grouped selectors.  So we will at some point be able to say:

div:not(.one, .two, #navbar) p

So any p element that is not descended from a div that has a class containing either one or two (or both), or that has an id of navbar, will be selected.

But how do we calculate the specificity of that whole selector?  Just add up all the pieces?  No.  The Working Group recently decided that the specificity contributed from inside a :not() will be equal to the single selector with the highest specificity.  So given div:not(.one, .two, #navbar) p, the #navbar will contribute 0,1,0,0 to the overall specificity of the selector, yielding a total of 0,1,0,2.  The specificities of .one and .two are ignored.
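To make that concrete, here’s a hypothetical pair of rules fighting over the same paragraph, once browsers support the grouped form:

div:not(.two, #navbar) p {color: red;}  /* 0,1,0,2 */
div.one p {color: blue;}  /* 0,0,1,2 */

Given the markup <div class="one"><p>…</p></div>, both rules match, and the paragraph comes up red: the #navbar inside the :not() boosts the first rule’s specificity even though the div has no such id.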

This same approach will be taken with the :has() and :matches() pseudo-classes.  Thus we get the following:


:matches(nav, header, footer#pageend) a[href] {color: silver;}  /* 0,1,1,2 */
article:has(a.external, a img)  /* 0,0,1,2 */
input:not([type="radio"], [type="checkbox"])  /* 0,0,1,1 */

In the first instance, the bits that are added together are footer#pageend and a[href], so that’s one ID, one attribute, and two elements.  In the second, it’s article and a.external for one class and two elements.  And last, we add up input and either of the [type=""] attribute selectors, since their specificities are equal, which means we add up one attribute and one element.

There is still, so far as I’m aware, no concept of DOM-tree proximity in CSS.  I would still continue to wager that will remain true, though I’d put a fair bit less money down now than I would have six years ago.


Displaying CSS Breakpoint Information with Generated Content

Published 6 years, 2 months past

In the course of experimenting with an example design for my talks at An Event Apart this year, I came up with a way to keep track of which breakpoint was in force as I tested the design’s responsiveness.  I searched the web to see if anyone else had written about this and didn’t come up with any results, so I’ll document it here.  And probably also in the talks.

What I found was that, since I was setting breakpoints in ems instead of pixels, the responsive testing view in browsers didn’t really help, because I can’t maintain a realtime mapping in my head from the current pixel value to however many ems it equals.  Since I don’t think the browser has a simple display of that information, I decided I’d do it myself.

It starts with some generated content:

body::before {content: "default";
   position: fixed; top: 1px; right: 1px; z-index: 100; padding: 1ch;
   background: rgba(0,0,0,0.67); color: rgba(255,255,255,0.75);
   font: bold 0.85rem Lucida Grande, sans-serif;}

You can of course change these to some other placement and appearance.  You can also attach these styles to the html element, or your page wrapper if you have one, or honestly even the footer of your document — since the position is fixed, it’ll be viewport-relative no matter where it originates.  The real point here is that we’re generating a bit of text we can change at each breakpoint, like so:

@media (max-width: 38em) {
   body::before {content: "<38em";}
   /* the rest of the breakpoint styles here */
}
@media (max-width: 50em) {
   body::before {content: "<50em";}
   /* the rest of the breakpoint styles here */
}
@media (min-width: 80em) {
   body::before {content: ">80em";}
   /* the rest of the breakpoint styles here */
}

The labels can be any string you want, so you can use “Narrow”, “Wide”, and so on just as easily as showing the measure in play, as I did.

The downside is that we can’t automatically make the labels cumulative in native CSS.  That means the order in which the @media blocks appear will determine which label is shown, even if multiple blocks are being applied.  As an example, given the styles above, at a width of 25em, the label shown will be <50em even though both the 38em and 50em blocks apply.

There are ways around this, like switching the order of the max-width blocks so the 38em block comes after the 50em block.  Or we could play specificity games:

@media (max-width: 38em) {
   html body::before {content: "<38em";}
   /* the rest of the breakpoint styles here */
}
@media (max-width: 50em) {
   body::before {content: "<50em";}
   /* the rest of the breakpoint styles here */
}

That’s not a solution that scales, sadly.  Probably better to sort the max-width media blocks in descending order, if you think you might end up with several.
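In other words:

@media (max-width: 50em) {
   body::before {content: "<50em";}
}
@media (max-width: 38em) {
   body::before {content: "<38em";}
}

Now, at a width of 25em, both blocks still apply, but the 38em block comes later, so its label wins and the box correctly reads <38em.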

The upside is that it’s easy to find and remove these lines once the development phase moves to QA.  Even better, before that point, you get a fully customizable in-viewport indication of where you are in the breakpoint stack as you look at the work in progress.  It’s pretty trivial to take this further by also changing the background color of the little box.  Maybe use a green for all the blocks above the “standard” set, and a red for all those below it.  Or toss in little background-image icons of a phone or a desktop, if you have some handy.
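A color change is just one more declaration per block; the particular red here is an arbitrary pick:

@media (max-width: 38em) {
   body::before {content: "<38em"; background: rgba(153,0,0,0.67);}
}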

So that’s the quick-and-dirty little responsive development hack I came up with this morning.  I hope it’s useful to some of you out there — and, if so, by all means share and enjoy!


Addendum: Emil Björklund proposes a variant approach that uses CSS Custom Properties (aka CSS variables) to implement this technique.


“CSS Pocket Reference, 5th Edition” to Production

Published 6 years, 2 months past

Just over an hour before I started writing this post, I handed off CSS Pocket Reference, 5th Edition to the Production department at O’Reilly.  What that means, practically speaking, is that barring any changes that the editors find need to be made, I’m done.

Besides all the new-new-NEW properties included in this edition (flexbox and grid being just two of the most obvious examples), we put a lot into improving the formatting for this edition.  Previous editions used a more sprawling format that led to the 4th edition getting up to 238 pages, which cast serious shade on the word “Pocket” there in the title.  After all, not all of us live in climates or cultures where 24/7 cargo pants are a viable option.

So with a few ideas from me and several more from the production team, we managed to add in all the new properties and still bring the page count down below 200.  My guess is the final copy will come in about 190 pages, but much will depend on how crazy the indexer gets, and how much the formatting gets changed in the final massaging.

We don’t have a firm release date yet; I’m pulling for April, but it’s really not up to me.  I’ll make announcements via all the usual channels when pre-order is available, and of course when publication day arrives.

For now, for the first time in many years, I don’t have a book project on my to-do list.  I don’t even have a book proposal on my to-do list.  It’s a slightly weird feeling, but not an unwelcome one.  I’ll be putting the extra time into my content for An Event Apart: I’m giving a talk this year on using the new CSS tools to make our jobs easier, and doing Day Aparts in Boston and San Francisco where I spend six hours diving deep into the guts of stuff like flexbox, Grid, writing modes, feature queries, and a whole lot more.

So my time will continue to be fully spoken for, is what I’m saying.  It’ll all be fun stuff, though, and it’s hard to ask for more out of my work.


Headings and Labels

Published 6 years, 3 months past

Following on my last two posts about accessibility improvements to meyerweb, I’ve made two more adjustments: better heading levels and added ARIA labels.

For the heading levels, the problem I face is one familiar to many authors: what makes sense as an <h1> in some situations needs to be an <h2> in others.  The most common example is the titles of blog posts like this one.  On its permalink page, the title of the page is the title of the post.  There, it should be an <h1>.  On archive pages, including the home page of meyerweb, there are a number of posts shown one after the other.  In those situations, each post title should be an <h2>.

Part of the redesign’s work was to write a single PHP routine that generated posts and their markup, which I could then simply call from wherever.  So I added an optional function parameter that allowed me to indicate the context in which a post was being placed.  It goes something like this:

<?php blogpostMarkup("archive"); ?>
function blogpostMarkup($type = "standalone") {
    // archive pages stack multiple posts, so titles drop from <h1> to <h2>
    $titletag = ($type == "archive") ? "h2" : "h1";
    // …markup is all generated here, wrapping the post title in $titletag…
    echo $output;
}

Or code to that effect.  (I did not go copy-paste from my actual code base.)

So now, heading levels are what they should be, at least on most pages (I may have missed updating some of my old static HTML pages; feel free to point them out in the comments if you find one).  As a part of that effort, I removed the <h1> from the masthead except on the home page, being the one place it makes sense to be an <h1>.

As for ARIA labels, that came about due to a comment from Phil Kragnes on my last post, where he observed that pages often have multiple elements with a role of navigation.  In order to make things more clear to ARIA users, I took Phil’s suggestion to add aria-label attributes with clarifying values.  So for the page-top skiplinks, I have:

<nav role="navigation" aria-label="page" id="skiplinks">

Similarly, for the site-navigation bar, I have:

<nav role="navigation" aria-label="site" id="navigate">

The idea is that screen readers will say “Page navigation region” and “Site navigation region” rather than just repeating “Navigation region” over and over.

Other than cleaning up individual pages’ heading levels and the occasional custom layout fix (e.g., the Color Equivalents Table needed a local widening of the content column’s maximum size), I think the redesign has settled into the “occasional tinkering” phase.  I may do something to spruce up my old Web Review articles (like the very first, written when HTML tags were still uppercase!) and I’m thinking about adding subnavigation in certain sections, but otherwise I think this is about it.  Unless I decide to go really over the top and model my Tools page after Simon St. Laurent’s lovely new Grid design, that is…

Of course, if you see something I overlooked, don’t hesitate to let me know!  I can’t guarantee fast response, but I can always guarantee careful consideration.

