Posts in the Web Category

No, Apple Did Not Crowdfund :focus-visible in Safari

Published 1 year, 8 months past

It’s not every week the release notes for a preview build of a web browser ignite Yet Another Twitter Teacup Storm (YATTS™), but that’s what happened when Safari Technology Preview 138 dropped late last week. At least, it’s what happened in the Twitter Teacups I tend to sip.

Just in case you missed it, here’s the summary:

  1. The WebKit team released Safari Technology Preview 138, and the release notes for same.
  2. The “CSS” section of the release notes started with a line saying:
    Enabled :focus-visible pseudo-class by default (r286783, r286776, r286775)
  3. A few people, including Jen Simmons, gave credit to Igalia for implementing :focus-visible by means of a crowdfunding project (more on that in a moment).
  4. KABOOM

I suppose I could be a bit more explicit in step 4, but I don’t really want to get into speculating on apparent motives and assumptions by others, because that’s not the point of this post. The point of this post is to clear up what seems to be a very common misunderstanding.

What I kept seeing people saying was something to the effect of, “Why the hell did Apple have to crowdfund this feature?” And that’s wrong in two ways:

  1. Apple doesn’t have to crowdfund anything, up to and including colonization of the Moon. (They might have to ask for a few bucks to do Venus or Mars.)
  2. Apple didn’t crowdfund :focus-visible.

This isn’t me splitting hairs, either. Nobody at Apple asked the crowd to fund anything. Nobody at Apple asked Igalia to crowdfund anything. They didn’t even ask Igalia to implement :focus-visible, and then Igalia decided to crowdfund the work. In fact, all of those assumptions get things almost exactly backwards — which is understandable! It’s what we expect from our experience of how the web has developed since at least the late 1990s. But here, something new happened.
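A quick aside for anyone who hasn’t used the feature at the center of all this: :focus-visible matches an element only when the browser determines that focus should be visibly indicated, as during keyboard navigation. A minimal sketch of the usual pattern (the element and outline values here are arbitrary):

    /* Show a clear focus ring for keyboard and other “visible focus” cases */
    button:focus-visible {
      outline: 2px solid currentColor;
      outline-offset: 2px;
    }

    /* Skip the ring when focus came from, say, a mouse click */
    button:focus:not(:focus-visible) {
      outline: none;
    }

Keyboard users keep their focus indicators, pointer users stop complaining about “weird outlines”, and that combination is why developers wanted the feature badly enough to pay for it.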

So, let me summarize what happened using yet another ordered list:

  1. Igalia noticed they’d done a fair bit of work adding features to all the browser engines (e.g., CSS Grid), with each project supported by a single paying client, and thought, “Wait a minute, the web is a commons. Why are features being driven one client at a time?”
  2. Of its own volition, Igalia decided to experiment with the idea of letting the web community (the “crowd”) vote for implementation of a missing browser feature with their wallets (the “funding”). They called this ongoing experiment Open Prioritization, and launched it in 2020.
  3. There were six possible projects, chosen by Igalia through their own set of criteria, for the community to vote on by pledging monetary support:
    • CSS lab() colors in Firefox
    • :focus-visible in WebKit/Safari
    • HTML inert in WebKit/Safari
    • Selector list arguments for :not() in Chrome
    • CSS Containment support in WebKit/Safari
    • CSS d (SVG path) support in Firefox
  4. The winner was implementing :focus-visible in WebKit/Safari, and by “winner”, I mean that project got the most monetary commitment from the members of the community.
  5. Igalia matched the community contributions dollar for dollar, and moved forward with the work.
  6. The work was done, and submitted to the WebKit code base. (Along the way, inconsistencies and other problems were discovered, addressed, and fixes contributed to engines other than WebKit.)
  7. The WebKit team accepted Igalia’s contributions, and are now shipping them in a preview build of Safari for developers to test out.

In other words: the community (more precisely, a portion of it) voted on which feature was most needed, Igalia implemented it, and Apple accepted it. Apple’s role in this process came at the end, not the beginning.

And no, this is not the usual thing! It’s not supposed to be. Igalia is deeply committed not just to advancing the web, but to democratizing that advancement to an unprecedented extent. It isn’t anything like a pure democratic effort, at least not yet, but these are early days and the initiative is structured to meet the current constraints of the environment (read: living under capitalism means coders gotta get paid).

But why is Igalia doing this? Time for another list! Just to switch things up, this one will be unordered:

  • Because the community should have more of a say in what gets prioritized in browsers. The community can be a large collection of individuals, a small collection of small companies, or a mix.
  • Because in every browser team, there’s always a priority list, and sometimes good features get pushed down that list for various reasons. It could be lack of expertise. It could be lack of time. It could be lack of interest. It could be interference by higher-ups. It doesn’t matter.
  • Because browser teams — not any one team, but the unfortunately small number of browser teams — are a bottleneck. No matter how much money the companies who employ those teams throw at them, they will always be a bottleneck, because resources are finite.

And this brings us to why I think “Wait, shouldn’t the $browser_name team have already done $feature_name by now? Why did an outside party have to do it?” is a little short-sighted. There will always be a $feature_name that the $browser_name team hasn’t done yet, for any value of $browser_name you care to posit. Today it could be WebKit; tomorrow, Chromium. In ten years, maybe there will be teams at Amazon and Huawei, making browser engines that compete for user share. Maybe not. Doesn’t actually matter, because however many or few engines there are, no matter what their priorities are, this problem will persist.

This is also why I’m not getting into Apple’s funding levels and priorities for WebKit and the web. Yes, there is much Apple-the-company can be criticized about, and personally, I am one of the biggest fans browser-engine diversity ever had, but that is a different conversation. Even if you could somehow wave a magic wand and open all platforms everywhere to engine diversity, and simultaneously cause a thousand browsers to bloom, we would still have the same basic problem. Open Prioritization would still need to exist.

For another piece of evidence on that point, look at the second Open Prioritization project: MathML-Core, whose goal is to bring the MathML Core specification to full cross-browser support, starting with Chrome (which needs the most work in this area) and then moving on to other engines (which need less work, but still need work). Doing this will not only improve support for web-wide math markup and its visual rendering, but will also improve the accessibility of math content on the web by making math a first-class content type in browsers. And you can contribute to this effort right now with a pledge of your own!

“But wait, why didn’t $browser_name already finish implementing MathML Core?” It doesn’t matter. Whether or not $browser_name (whichever one that is) should have done this by now, they haven’t. Maybe they would have done it eventually, but again, that doesn’t matter. We can make it happen now.

That’s what happened with :focus-visible in WebKit, which helped improve other engines; it’s what will happen with MathML Core in various browsers; and it could very well be what happens with other features in the future. Igalia would love nothing more than to see more and more projects launch, even if they don’t get hired to do the work for a single one of them. This isn’t us spackling over the cracks of browser teams’ neglect. This is us trying to chart an entirely new way to advance browser engines.

I go deeper into all of the above, as well as how Open Prioritization is designed to be an open forum and not some private reserve of Igalia’s, in a 17-minute talk delivered at W3C TPAC in fall 2021, available and captioned on Igalia’s YouTube channel. This post sort of summarizes it, but there are more examples and details in the talk, so if you’re interested, please do check that out.

Just in case your eyes sort of glazed and you skipped to the end to see if there was a TL;DR, here it is:

The addition of :focus-visible to WebKit was led by the community, done by Igalia, and contributed to WebKit without any involvement from Apple except in the sense of their reviewing patches and accepting the contributions. Many of us are mad at Apple for a lot of good reasons, but please don’t let the process of venting that anger tar the goals and achievements of Open Prioritization. The future browser-feature priority you save may be your own.


Better Image Optimization by Restricting the Color Index

Published 3 years, 5 months past

Let’s talk about image optimization.  There are a few images used in meyerweb’s new design, and while I wanted them to be pleasing to the eye, I also wanted them to be lightweight.  My rough goal was to not have the design elements (images plus CSS) be more than half the total page weight for a typical blog post, not counting any post-specific images like photos or diagrams.  Thus, if a typical blog post’s page weight was 500KB, I didn’t want the images and CSS to add up to more than 250KB or so.

Spoiler: I achieved my goal, but at the same time fell short.  What I had overlooked was custom fonts, which I’ll get to in a later post.

I found out that how you optimize images matters a whole lot.  Let’s consider one example: the spiral-like image (which, yes, is a quiet callback to past work) at the center of the previous-next links at the bottom of blog posts and archive pages.  After I extracted a full-resolution copy of that particular sketch from pages 13-14 of Hamonshū, Vol. 1 and did a little cleanup with filters and so on, it became a 1.4MB Acorn file.

The full-size image in Acorn.

(Side note: Acorn’s “Transparentomatic” filter was an enormous time-saver on this project — it made dropping out the page texture a breeze, and easily adjustable, without forcing me to create and retouch mask layers and whatnot.  Thanks, Flying Meat!)

With the image ready to be tested in-browser, I would use Acorn’s Web Export dialog to save it as a PNG.  The nice thing about this dialog is its built-in Resize feature, which let me keep the Acorn file at its native size (almost a thousand pixels on each side) and export it to the size I wanted — in this case, 200 pixels across.  I did this sort of thing a lot, because I tested a variety of images for every design element.

Once I settled on the image I wanted, I’d drop it on ImageOptim to optimize it.  This usually slammed all eight cores in my aged laptop’s CPU for a good few seconds, and resulted in up to 5% size savings.

That last paragraph probably looks like an indictment of ImageOptim, but wait!  It’s fully redeemed by the end of the post, by which time I will have indicted myself instead.

At this point, my spiral image had gone from a 1.4MB Acorn file to a 30KB PNG.  That’s pretty good, even if 30KB feels a tad bulky for a 200×200 image.  I just assumed, what with all the transparency and shades of color and all that, it wasn’t too far out of line.  But as the whole design started to come together, I discovered I’d let image bloat sneak up on me in the worst way: all the illustration images on, say, the home page added up to close to a megabyte, and they’d all been through ImageOptim already.

I went back to Acorn to see if I could squeeze any more out of the file size, maybe convert some of the images to JPGs if they didn’t need the transparency.  As I flipped between file formats in the Web Export dialog, I noticed something I’d previously overlooked in the PNG export options: a bit depth slider.  I’d been saving the PNGs with no bit depth restrictions, meaning the color table was holding space for 2²⁴ (16,777,216) colors.  That’s… a lot of colors, roughly 2²⁴ of which I wasn’t actually using.

When I clicked the “Index PNG Colors” checkbox and changed the slider until I started getting dithers or obvious color loss, then brought it up a notch or two, the difference was astounding.  Instead of a 30KB file, I got a 4.4KB file.  Instead of saving at 75% of the original size, it was now 11%.

Wait a second… how long has THAT been there?

So I went back through the directory with all my design elements and repeated the process.  I do have batch image processing software installed, but I elected to do this manually so I could pick the best color depth for each file by eye.  It could be that some would be okay at 4 colors instead of 8; others might need 16 or 32 to retain visual fidelity.  Fortunately for me, I only had a couple dozen images to go through, but it would have been worth it even at 10 times that many.
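If you’d rather script that process than click through export dialogs, the same idea is available from the command line.  Here’s a sketch using pngquant, a palette-reduction tool you’d have to install separately (the file names are just examples):

    # Re-save with a 16-color palette; writes spiral-fs8.png next to the original
    pngquant 16 spiral.png

    # Try a couple of depths and compare the results by eye
    pngquant 8 --output spiral-8.png spiral.png
    pngquant 32 --output spiral-32.png spiral.png

The number is the maximum palette size, which is the same lever as Acorn’s bit depth slider: 16 colors is 4-bit, 256 is 8-bit.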

Once I’d gone through all the images and saved them with restricted color depth, my theme’s image directory was down to 242KB total.  The biggest of them, the separator wave illustrations, had gone from being ~150KB each to ~25KB each — all five of them together now totalled less than just one of them had before I did the color indexing.

The directory, well and fully smooshed.

At this point, I thought, “All right, let’s see what ImageOptim does with these.”  It squeezed them down even further, taking my total from 242KB to 222KB, about an eight percent reduction.  Which is to say, the percentage savings I got on these already-small files was larger than I’d been getting on the much bigger files — plus, ImageOptim processed them quickly and with a minimum of CPU slamming.  Which is honestly pretty great, given the age of my laptop (about seven years).

So: did I meet my performance goal?  As I said at the outset, yes, but also no.  For a single blog post with around 10KB of text content and no embedded media, the page weight is around 460KB, with the size varying a bit depending on how much markup is needed for the 10KB of text.  (Here’s one recent example with some content variety.)  Of that, the CSS, split across two files, totals 35.2KB.  The images add up to 102.9KB.  Add them together, and you get just a hair over 138KB, or right around 30%.  Huge success!

Except I hadn’t factored in custom fonts, which all by themselves currently total 203.6KB (44% of total page weight), mostly due to the three faces of IM Fell I’m using.  That’s right: The fonts weigh more than the CSS and images put together.  Once they’re added in with the CSS and images, the design elements end up being almost 75% of the total page weight — about 341.6KB of the 460KB total.  Most of the rest is the 104.4KB chewed up by showdown.js, the enhancement script I’m using to allow the use of Markdown in post comments.

Thus, next up on my performance quest is looking into subsetting the fonts I’m using in order to get their weight down, and finding out if there’s anything I can do to subset Showdown as well.
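Font subsetting, for what it’s worth, can also be done from the command line.  Here’s a sketch using pyftsubset from the fonttools package; the character range and file names are just examples, not my actual setup:

    # Keep only printable ASCII and emit a WOFF2 file
    pyftsubset IMFellEnglish-Regular.ttf \
        --unicodes=U+0020-007E \
        --flavor=woff2 \
        --output-file=IMFellEnglish-subset.woff2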

But as of now, I’m well pleased with where I ended up on image optimization.  I just need to go back and do the same for post-specific images I’d left at unrestricted color depths, where I anticipate a similar 90% savings in file sizes.  If you’ve got a lot of images, particularly PNGs, try running them through a process that lets you restrict the color depth, and see how much it saves.  The results might surprise you!


Get Static

Published 3 years, 6 months past

If you are in charge of a web site that provides even slightly important information, or important services, it’s time to get static.  I’m thinking here of sites for places like health departments (and pretty much all government services), hospitals and clinics, utility services, food delivery and ordering, and I’m sure there are more that haven’t occurred to me.  As much as you possibly can, get it down to static HTML and CSS and maybe a tiny bit of enhancing JS, and pare away every byte you can.

Because too many sites are already crashing because their CMSes can’t keep up with the traffic surges.  And too many sites are using dynamic frameworks that drain mobile batteries and shut out people with older browsers.  That’s annoying and counter-productive in the best of times, but right now, it’s unacceptable.  This is not the time for “well, this is as performant as our stack gets, so I guess users will have to live with it”.  Performance isn’t just something to aspire to any more.  Right now, in some situations, performance could literally be life-saving to a user, or their family.

We’re in this to serve our users.  The best service you can render at this moment is to make sure they can use your site or service, not get 502/Bad Gateway or a two-minute 20%-battery-drain page render.  Everything should take several back seats to this effort.

I can’t tell you how best to get static — only you can figure that out.  Maybe for you, getting static means using very aggressive server caching and a cache-buster approach to updating info.  Maybe it means using some kind of static-render plugin for your CMS.  Maybe it means accelerating a planned migration to a static-site CMS like Jekyll or Eleventy or Grav.  Maybe it means saving pages as HTML from your browser and hand-assembling a static copy of your site for now.  There are a lot of ways to do this, but whatever way you choose, do it now.
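To make the first of those options concrete: if the CMS has to stay, even a blunt “microcache” in front of it can absorb a surge, because the backend renders each page at most once every few minutes no matter how many people request it.  Here’s a minimal nginx sketch; the paths, zone name, backend address, and timings are all placeholders, not a recommendation for any particular stack:

    # Reserve a small on-disk cache, keyed by URL
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=microcache:10m max_size=100m;

    server {
        listen 80;

        location / {
            proxy_pass http://127.0.0.1:8080;              # the CMS backend
            proxy_cache microcache;
            proxy_cache_valid 200 301 5m;                  # re-render at most every 5 minutes
            proxy_cache_use_stale error timeout updating;  # a stale page beats no page
        }
    }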


Addendum: following are a few resources that can help. Please suggest more in the comments!


CWRU2K

Published 3 years, 8 months past

Before I tell you this story of January 1st, 2000, I need to back things up a few months into mid-1999.  I was working at Case Western Reserve University as a Hypermedia Systems Specialist, which was the closest the university’s job title patterns could get to my actual job, which was, no irony or shade, campus Webmaster.  I was in charge of www.cwru.edu and providing support to departments who wanted a Web presence on our server, among many other things.  My fellow Digital Media Services employees provided similar support for other library and university systems.

So in mid-1999, we were deep in the throes of Y2K certification.  The young’uns in the audience won’t remember this, but to avoid loss of data and services when the year rolled from 1999 to 2000, pretty much the entire computer industry was engaged in a deep audit of every computer and program under our care.  There’s really been nothing quite like it, before or since, but the job got done.  In fact, it got done so well, barely anything adverse happened and some misguided people now think it was all a hoax designed to extract hefty consulting fees, instead of the successful global preventative effort it actually was.

As for us, pretty much everything on the Web side was fine.  And then, in the middle of one of our staff meetings about Y2K certification, John Sully said something to the effect of, “Wouldn’t it be funny if the Web server suddenly thought it was 1900 and you had to use a telegraph to connect to it?”

We all laughed and riffed on the concept for a bit and then went back to Serious Work Topics, but the idea stuck in my head.  What would a 1900-era Web site look like?  Technology issues aside, it wasn’t a complete paradox: the ancestor parts of CWRU, the Case Institute of Technology and the Western Reserve University, had long existed by 1900 (founded 1880 and 1826, respectively).  The campus photos would be black and white rather than color, but there would still be photos.  The visual aesthetic might be different, but…

I decided to make it a reality, and CWRU2K was born.  With the help of the staff at University Archives and a flatbed scanner I hauled across campus on a loading dolly, I scanned a couple dozen photos from the period 1897-1900 — basically, all those that were known to be in the public domain, and which depicted the kinds of scenes you might put on a Web site’s home page.

Then I reskinned the home page to look more “old-timey” without completely altering the layout or look.  Instead of university-logo blues and gold, I recolored everything to be wood-grain.  Helvetica was replaced with an “Old West” font in the images, of which there were several, mostly in the form of MM_swapimage-style rollover buttons.  In the process, I actually had to introduce two Y2K bugs to the code we used to generate dates on the page, so that instead of saying 2000 they’d actually say 1899 or 1900.  I changed other things to match the time, like switching the phone number to two-letters-then-numbers format while still retaining full international dialing information, and adding little curlicues to things.  Well before the holidays, everything was ready.

The files were staged, a cron job was set up, and at midnight on January 1st, 2000, the home page seamlessly switched over to its 1900 incarnation.  That’s a static snapshot of the page, so the picture will never change, but I have a gallery of all the pictures that could appear, along with their captions, which I strove to write in that deadpan stating-the-obvious tone the late 19th Century always brings to my mind.  (And take a close look at the team photo of The Rough Riders!)

In hindsight, our mistake was most likely in adding a similarly deadpan note to the home page that read:

Year 2000 Issues

Despite our best efforts at averting Y2K problems, it seems that our Web server now believes that it is January of 1900. Please be advised that we are working diligently on the problem and hope to have it fixed soon.

I say that was a mistake because it was quoted verbatim in stories at Wired and The Washington Post about Y2K glitches, both of which reported that we’d actually suffered a real, unintentional Y2K bug, with Wired giving us points for having “guts” in publicly calling “a glitch a glitch”.  After I emailed both reporters to explain the situation and point them to our press release about it, The Washington Post did publish a correction a few days later, buried in a bottom corner of page A16 or something like that.  So far as I know, Wired never acknowledged the error.

CWRU2K lasted a little more than a day.  Although we’d planned to leave it up until the end of January, we were ordered to take it down on January 2nd.  My boss, Ron Ryan, was directed to put a note in my Permanent Record.  The general attitude Ron conveyed to me was along the lines of, “The administration says it’s clever and all, but it’s time to go back to the regular home page.  Next time, we need to ask permission rather than forgiveness.”

What we didn’t know at the time was how close he’d come to being fired.  At Ron’s retirement party last year, the guy who was his boss on January 2nd, 2000, Jim Barker, told Ron that Jim had been summoned that day to a Vice President’s office, read the riot act, and was sent away with instructions to “fire Ron’s ass”.  Fortunately, Jim… didn’t.  And then kept it to himself for almost 20 years.

There were a number of other consequences.  We got quite a bit of email about it, some in on the joke, others taking it as seriously as Wired.  There’s a particularly lovely note partway down that page from the widow of a Professor Emeritus, and I have to admit that I still smile over the props we got from folks on the NANOG mailing list.  I took an offer to join a startup a couple of months later, and while I was probably ready to move on in any case, the CWRU2K episode — or rather, the administration’s reaction to it — helped push me to make the jump.  I was probably being a little juvenile and over-reacting, but I guess you do that when you’re younger.  (And I probably would have left the next year regardless, when I got the offer to join Netscape as a Standards Evangelist.  Actual job title!)

So, that’s the story of how Y2K affected me.  There are some things I probably would have done differently if I had it to do over, but I’m 100% glad we did it.


Securing Web Sites Made Them Less Accessible

Published 5 years, 1 month past

In the middle of last month (July 2018), I found myself staring at a projector screen, waiting once again to see if Wikipedia would load.  If I was lucky, the page started rendering 15-20 seconds after I sent the request.  If not, it could be closer to 60 seconds, assuming the browser didn’t just time out on the connection.  I saw a lot of “the server stopped responding” over the course of a few days.

It wasn’t just Wikipedia, either.  CNN International had similar load times.  So did Google’s main search page.  Even this here site, with minimal assets to load, took a minimum of 10 seconds to start rendering.  Usually longer.

In 2018?  Yes.  In rural Uganda, where I was improvising an introduction to web development for a class of vocational students, that’s the reality.  They can have a computer lab full of Dell desktops running Windows or rows of Raspberry Pis running Ubuntu or whatever setup there is, but when satellites in geosynchronous earth orbit are your only source of internet, you wait.  And wait.  And wait.

I want to explain why — and far more importantly, how we’ve made that experience interminably worse and more expensive in the name of our comfort and security.

First, please consider the enormously constrained nature of satellite internet access.  If you’re already familiar with this world, skip ahead a few paragraphs; but if not, permit me a brief description of the challenges.

For geosynchronous-satellite internet access, the speed of light becomes a factor in ping times: just having the signals propagate through a mixture of vacuum and atmosphere chews up approximately half a second of travel time over roughly 94,500 miles (~152,000km).  If all that distance were vacuum, your absolute floor for ping latency would be about 506 milliseconds (152,000km ÷ 299,792km/s ≈ 0.507 seconds).

That’s just the time for the signals to make two round trips to geosynchronous orbit: one for the request, one for the response.  In reality, you also have to add the time needed to route the packets on either end, and the re-transmission time at the satellite itself.

But that’s not the real connection killer in most cases: packet loss is.  After all, these packets are going to orbit and back.  Lots of things along those long and lonely signal paths can cause the packets to get dropped.  50% packet loss is not uncommon; 80% is not unexpected.

So, you’re losing half your packets (or more), and the packets that aren’t lost have latency times around two-thirds of a second (or more).  Each.

That’s reason enough to set up a local caching server.  Another, even more pressing reason is that pretty much all commercial satellite connections come with data caps.  Where I was, their cap was 50GB/month.  Beyond that, they could either pay overages, or just not have data until the next month.  So if you can locally cache URLs so that they only count against your data usage the first time they’re loaded, you do that.  And someone had, for the school where I was teaching.

But there I stood anyway, hoping my requests to load simple web pages would bear fruit, and I could continue teaching basic web principles to a group of vocational students.  Because Wikipedia wouldn’t cache.  Google wouldn’t cache.  Meyerweb wouldn’t cache.  Almost nothing would cache.

Why?

HTTPS.

A local caching server, meant to speed up commonly-requested sites and reduce bandwidth usage, is a “man in the middle”.  HTTPS, which by design prevents man-in-the-middle attacks, utterly breaks local caching servers.  So I kept waiting and waiting for remote resources, eating into that month’s data cap with every request.

The drive to force every site on the web to HTTPS has pushed the web further away from the next billion users — not to mention a whole lot of the previous half-billion.  I saw a piece that claimed, “Investing in HTTPS makes it faster, cheaper, and easier for everyone.”  If you define “everyone” as people with gigabit fiber access, sure.  Maybe it’s even true for most of those whose last mile is copper.  But for people beyond the reach of glass and wire, every word of that claim was wrong.

If this is a surprise to you, you’re by no means alone.  I hadn’t heard anything about it, so I asked a number of colleagues if they knew about the problem.  Not only had they not, they all reacted the same way I did: this must not be an actual problem, or we’d have heard about it!  But no.

Can we do anything?  For users of up-to-date browsers, yes: service workers create a “good” man in the middle that sidesteps the HTTPS problem, so far as I understand.  So if you’re serving content over HTTPS, creating a service worker should be one of your top priorities right now, even if it’s just to do straightforward local caching and nothing fancier.  I haven’t gotten one up for meyerweb yet, but I will do so very soon.
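And the straightforward version really is small.  Here’s a minimal cache-first worker, sketched with placeholder cache and file names rather than anything meyerweb actually ships:

    // sw.js: serve from cache first, fall back to the network, cache what comes back
    const CACHE = 'site-cache-v1';

    self.addEventListener('install', (event) => {
      event.waitUntil(
        caches.open(CACHE).then((cache) => cache.addAll(['/', '/style.css']))
      );
    });

    self.addEventListener('fetch', (event) => {
      event.respondWith(
        caches.match(event.request).then((cached) =>
          cached ||
          fetch(event.request).then((response) => {
            // Keep a copy of successful GET responses for next time
            if (response.ok && event.request.method === 'GET') {
              const copy = response.clone();
              caches.open(CACHE).then((cache) => cache.put(event.request, copy));
            }
            return response;
          })
        )
      );
    });

Register it from your pages with navigator.serviceWorker.register('/sw.js'), and repeat visits stop re-crossing the satellite link for anything already cached.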

That’s great for modern browsers, but not everyone has the option to be modern.  Sometimes they’re constrained by old operating systems to run older browsers, ones with no service-worker support: a lab full of Windows XP machines limited to IE8, for example.  Or on even older machines, running Windows 95 or other operating systems of that era.  Those are most likely to be the very people who are in situations where they’re limited to satellite internet or other similarly slow services with unforgiving data caps.  Even in the highly-wired world, you can still find older installs of operating systems and browsers: public libraries, to pick but one example.  Securing the web literally made it less accessible to many, many people around the world.

Beyond deploying service workers and hoping those struggling to bridge the digital divide make it across, I don’t really have a solution here.  I think HTTPS is probably a net positive overall, and I don’t know what we could have done better.  All I know is that I saw, first-hand, the negative externality that was pushed onto people far, far away from our data centers and our thoughts.

My thanks to Tim Kadlec and Ethan Marcotte for their feedback and insight while I was drafting this post, and to Lara Hogan and Aaron Gustafson for their early assistance with my research.


CSS: The Definitive Guide, 4th Edition

Published 6 years, 2 months past

On Monday, July 3rd, as I sat in the living room of a house just a bit north of New York City, I pushed the last writing and editing changes to CSS: The Definitive Guide, Fourth Edition and notified the production department at O’Reilly that it was ready.

All twenty chapters, three appendices, and associated front matter are now in their hands.

It’s been a long and difficult journey to get here.  Back in 2011-2012, I started updating chapters and releasing them as standalone books, for those who wanted to grab specific topics early.  In mid-2013, I had to stop all work on the book, and wasn’t really able to get back into it until mid-2015.  At that point, I realized that several new chapters had to be added — for example, when I started out on this edition, Flexbox and Grid were pie-in-the-sky ideas that might or might not come to pass.  Feature queries weren’t a thing, back then.  Filters and masks and blend modes were single-browser at best, when I started out.  And forget about really complex list counters.

Now all those topics (and more!) have chapters, or at least major sections.  Had I not been delayed two years, those topics might not have made it into the fourth edition.  Instead, they’re in there, and this edition may well end up twice as long as the previous edition.

I also might not have brought on a co-author, the inestimable Estelle Weyl.  If not for her contribution in new material and her close, expert review of the chapters I’d already written, this book might have been another year in the making.  The Guide was always my baby, but I couldn’t be happier that I decided to share it with Estelle, nor prouder that her name will be on the cover with mine.

Speaking of major changes, I probably wouldn’t have learned AsciiDoc, nor adopted Atom as an authoring environment (I still use BBEdit for heavy-lift text processing, as well as most of my coding).  O’Reilly used to be a “give us your Word docs!” shop like everyone else, but that toolchain doesn’t really exist any more, from what I can tell.  In fact, the first few chapters I’d given them were in Word.  When I finally returned to writing, they had to give me those chapters back as AsciiDoc exports, so I could make updates and push them to O’Reilly’s internal repository.  The files I created to produce the book’s figures went into their own public repository, which I’ll get to reorganizing once the text is all settled and the figure numbers are locked in.  (Primary to do: create chapter lists of figures, linked to the specific files that were used to create those figures.  Secondary to do: clean up the cruft.)

As of this moment, the table of contents is:

  • Preface
  1. CSS and Documents
  2. Selectors
  3. Specificity and the Cascade
  4. Values, Units, and Colors
  5. Fonts
  6. Text Properties
  7. Basic Visual Formatting
  8. Padding, Borders, Outlines, and Margins
  9. Colors, Backgrounds, and Gradients
  10. Floating and Shapes
  11. Positioning
  12. Flexible Box Layout
  13. Grid Layout
  14. Table Layout in CSS
  15. Lists and Generated Content
  16. Transforms
  17. Transitions
  18. Animation
  19. Filters, Blending, Clipping, and Masking
  20. Media-Dependent Styling
  • Appendix A: Animatable Properties
  • Appendix B: Basic Property Reference
  • Appendix C: Color Equivalence Table

Disclaimer: the ordering and titles could potentially change, though I have no expectation of either.

I don’t have a specific timeline for release as yet, but as soon as I get one, I’ll let everyone know in a post here, as well as the usual channels.  I expect it to be relatively speedy, like the next couple of months.  Once production does their thing, we’ll get it through the QC process — checking to make sure the figures are in the right places and sizes, making sure no syntax formatting got borked, that kind of thing — and then it’ll be a matter of getting it out the door.

And just in case anyone saw there was news about O’Reilly’s change in distribution and is wondering what that means: you can still buy the paper book or the e-book from your favorite retailer, whether that’s Amazon or someone else.  You just won’t be able to buy direct from O’Reilly any more, except in the sense that subscribing to their Safari service gives you access to the e-book.  That does mean a tiny bit less in royalties for me and Estelle, since direct paper sales were always the highest earners.  Then again, hardly anyone ever bought their paper copies direct from O’Reilly, so honestly, the difference will be negligible.  I might’ve been able to buy an extra cup of coffee or two, if I drank coffee.

It feels…well, honestly, it feels weird to have finally reached this point, after such a long time.  I wish I’d gotten here sooner for a whole host of reasons, but this is where we are, and regardless of anything else, I’m proud of what Estelle and I have created.  I’m really looking forward to getting it into your hands.


Practical CSS Grid

Published 6 years, 6 months past

…In the run-up to Grid support being released to the public, I was focused on learning and teaching Grid, creating test cases, and using it to build figures for publication.  And then, March 7th, 2017, it shipped to the public in Firefox 52.  I tweeted and posted an article and demo I’d put together the night before, and sat back in wonderment that the day had finally come to pass.  After 20+ years of CSS, finally, a real layout system, a set of properties and values designed from the outset for that purpose.

And then I decided, more or less in that moment, to convert my personal site to use Grid for its main-level layout.

Me, writing for A List Apart, taking you on a detailed, illustrated walkthrough of how I added CSS Grid layout to meyerweb.com, while still leaving the old layout in place for non-Grid browsers.  As I write this, Grid is available in the latest public releases of Firefox, Chrome, and Opera, with Safari likely to follow suit within the next few weeks.  Assuming the last holds true, that’s four major browsers shipping major support in the space of one month.  As Jen Simmons hashtagged it, it’s a new day in browser collaboration.
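The core trick from that walkthrough, boiled way down: leave the old layout in place, then override it inside an @supports block that only Grid-capable browsers will apply.  A sketch with made-up class names, not meyerweb’s actual styles:

    /* Old layout: floats, for every browser */
    .sidebar { float: left; width: 20%; }
    .content { margin-left: 23%; }

    /* New layout: applied only where Grid exists */
    @supports (display: grid) {
      main { display: grid; grid-template-columns: 1fr 3fr; grid-gap: 2em; }
      .sidebar, .content { width: auto; margin-left: 0; }
    }

Browsers without Grid either don’t understand @supports at all or evaluate the condition as false, so they skip the block and keep the float layout; Grid browsers apply it and ignore the floats, since floats have no effect on grid items.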

As I’ve said before, I understand being hesitant.  Based on our field’s history, it’s natural to assume that Grid as it stands now is buggy, incomplete, and will have a long ramp-up period before it’s usable.  I am here to tell you, as someone who was there for almost all of that history, Grid is different.  There are areas of incompleteness, but they’re features that haven’t been developed yet, not bugs or omissions.  I’m literally using Grid in production, right now, on this site, and the layout is fine in both Grid browsers and non-Grid browsers (as the article describes).  I’m very likely to add it to our production styles over at An Event Apart in the near future.  I’d probably have done so already, except every second of AEA-related work time I have is consumed by preparations for AEA Seattle (read: tearing my new talk apart and putting it back together with a better structure).

Again, I get being wary.  I do.  We’re used to new CSS stuff taking two years to get up to usefulness.  Not this time.  It’s ready right now.

So: dive in.  Soak up.  Enjoy.  Go forth, and Grid.


Doubled Grids

Published 6 years, 6 months past

Chrome 57 was released yesterday, not quite a week ahead of schedule, with Grid support enabled.  So that’s two browsers with Grid support in the space of two days, including the most popular browser in the world right now.  Safari has it enabled in Technology Preview builds, and the WebKit team just blogged an introduction to Grid, so it definitely feels like it’ll be landing there soon as well.  No word from Edge, so far as I know.

I did discover a Chrome bug in Grid this morning, albeit one that might be fairly rare.  I filed a bug report; the upshot is this: most or all of an affected page is rendered, and then gets blanked.  I ran into a similar bug earlier this year, and it seemed to affect people semi-randomly — others with the same OS as me didn’t see it, and others with different OSes did see it.  This leads me to suspect it’s related to graphics cards, but I have no proof of that at all.  If you can reproduce the bug, and more importantly come up with a reliable way to fix it, please comment on the Chromium bug!

