By most measures, I’ve had a pretty damn successful career. I’m not at “I can retire today” money and nobody’s erecting statues with my visage on them, but only the first of those holds any interest for me, and I’m not expecting it any time soon. (At current rates of saving and investment return, I should reach that state… right around the traditional age of retirement, actually.)
Of course, I’ve written a bunch of books that earned me some royalties, but books are not a way to become wealthy, unless you’re crazy lucky. Yes, you have to put in the work to write the book, but in the end, whether your book makes you coffee money or high-end-chrome coffee machine money is down to forces entirely outside your control. Certainly outside mine. When I wrote my first CSS book, nobody expected CSS to be more than a slowly dying niche technology. When I wrote the second, CSS had been declared dead twice over. When I wrote the third and fourth, it was just starting to revive.
I invested tons of effort and time into understanding CSS, and then to explaining it. Because I was lucky enough to put that work toward a technology that turned out to be not just successful, but deeply important to the web, the work paid off. But think of the people who put that same kind of time and effort into understanding and explaining DSSSL. “Into what, now?” you say. Exactly.
Similarly, when Jeffrey and I set out to create An Event Apart, there was no assurance that there was a viable market there. Nearly all the old web conferences had died, and those few that remained were focused on audiences very much unlike the one we had in mind. Luckily for us, the audience existed. We worked really hard—still work really hard—to find and speak to that audience with the topics and speakers we present, but it would all have come to nothing if not for the sheer luck of having an audience for the kind of show we wanted to create.
For most of my adult life, I’ve been keenly aware of the incredible amount of luck that goes into success, and the awareness only grows as the years pass by. Just putting in a lot of hard work isn’t enough. You also have to have the sheer good fortune to have that hard work pay off. You can sink everything you have, money and soul, into building a place in life, only to have it all sabotaged and swept away by random chance. You can invest every bit of your life and fortune into an outcome that blind fate renders impossible.
So yes, I worked hard to understand the web, and to explain the web, and to write books and talks, and to create a conference series, and everything else I’ve done over the years—but I was supremely lucky to have that work come to something. An incredible combination of time and place and interest and birth and a million million other things made that possible.
More to the point, the existence of people interested in what I have to say made that possible. So I thank you, one and all, for all that and still more. Thank you for rewarding and redeeming the work I’ve done. Thank you for being of like mind. Thank you for your support. Thank you for listening. Thank you.
Throughout 2015, a few people who’ve seen me present “Designing for Crisis” at An Event Apart have noticed that, on the slides where I have filler text, it’s a localized variant. In Washington, DC, for example, one section started out:
Andrew ellicott lobortis decima thomas jefferson vulputate dynamicus fiant kingman park sollemnes ford’s theater. Vero videntur modo claritatem possim quis quod est noma howard university consequat diam. Blandit ut claram north michigan park seacula judiciary square william jefferson clinton hawthorne millard fillmore iis…
This was a product of some simple PHP I’d originally written to generate Cleveland-themed filler text a year or so back, which you can find at localipsum.meyerweb.com, and which I’d expanded upon to let me generate text for my presentations at AEA. The name comes from the original idea I had, which was to provide a list of cities/regions/whatever, so that users could pick one and generate some filler text. That never quite came together. I had a semi-working version once, but the UI was horrible and the file management was worse and I got discouraged and rolled back to what you see now.
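The underlying idea is simple enough to sketch. Here’s a minimal illustration in Python rather than the original PHP—the word lists, function name, and parameters are all made up for this example, not taken from the localipsum source:

```python
import random

# Hypothetical word lists; the real generator has its own, much larger ones.
LOREM = ["lorem", "ipsum", "dolor", "sit", "amet", "consectetur",
         "adipiscing", "elit", "sed", "eiusmod", "tempor", "claritatem"]
LOCAL = {
    "cleveland": ["cuyahoga", "euclid", "tremont", "ohio city", "the flats"],
    "washington-dc": ["kingman park", "noma", "judiciary square",
                      "hawthorne", "ford's theater"],
}

def local_ipsum(city, sentences=3, words_per_sentence=10, seed=None):
    """Mix locale-specific terms into ordinary lorem-ipsum filler."""
    rng = random.Random(seed)  # seedable, so output can be reproduced
    pool = LOREM + LOCAL[city]
    out = []
    for _ in range(sentences):
        words = [rng.choice(pool) for _ in range(words_per_sentence)]
        # Capitalize the first word and end the sentence with a period.
        out.append(words[0].capitalize() + " " + " ".join(words[1:]) + ".")
    return " ".join(out)

print(local_ipsum("cleveland", seed=42))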
I kept telling myself that I’d get back to it, do the city-selection thing right, polish it nicely, and then finally release the source. I’ve told myself that all year, as I manually swapped in city information to generate the filler text for each of my talks. Now I’ve finally admitted to myself that it isn’t going to happen, so: here’s the source. I chose a pretty permissive license—BSD-ISC, if I recall correctly—so feel free to make use of it for your own filler text. I’ll be happy to accept pull requests with improvements, but not package-management or complete MVC restructuring. Sorry.
I know, it’s a goofy little thing and the code is probably pants, but I kinda like it and figure maybe someone out there will too. If nothing else, we can look for a few laughs in the output and maybe—just maybe—learn a little something about ourselves along the way.
(P.S. Speaking of “Designing for Crisis”, look for a post about that, and more specifically video of it, in the next few weeks.)
If there’s one thing that’s made it possible for me to learn as much as I have, and create as much as I have, it’s that my default attitude about things, especially technical things, is that I’m probably wrong about them.
When I first took up CSS and it didn’t do what I expected from reading the spec, I started creating simple, focused tests of each property and its values, to figure out what I was getting wrong. Because I wanted to be sure, I built tests for all the properties, even the ones I was confident about understanding—and, in places, found out my confidence was misplaced. Eventually, those tests became the CSS1 Test Suite. Since I had discovered that, in a lot of cases, the browsers were actually wrong, I decided to document CSS support in browsers. That became the CSS Mastergrid (long since gone). On the strength of that resource, I started writing articles to explain how things worked, or didn’t, which led to writing my first book. And so on.
But it all started because I assumed I was wrong about how CSS should work, not that the browsers were fundamentally broken. Simple test cases seemed like the best way to find out. One thing led to another. In a lot of ways, you could say that my career was made possible by me assuming I was wrong, and setting out to determine exactly how wrong I was.
It’s not that I want to be wrong; in fact, I dislike being wrong. But I dislike continuing to be wrong much more, so I try to find out how I’m wrong, in hopes of becoming less wrong. It’s not even “strong opinions, weakly held”—it’s more “strong suspicion of error, strongly pursued”. In public, when necessary. (This is where it helps to be willing to look like a dork, or even a fool, as Kitt wrote about yesterday.)
When asking for help, this is the approach I take. When I post to mailing lists or forums, it usually comes out as, “Here’s what I think is so, but results don’t match that understanding. What am I missing? Please help me get it right.”
Everyone has their own idiosyncratic collection of tools they can’t work without, and I’ve recently been using one of mine as I produce figures for CSS: The Definitive Guide, Fourth Edition (CSS:TDG4e). It’s Firefox’s command-line screenshot utility.
To get access to screenshot, you first have to hit ⇧F2 for the Developer Toolbar, not ⌥⌘K for the Web Console. (I know, two command lines—who thought that was a good idea? Moving on.) Once you’re in the Developer Toolbar, you can type s and then hit Tab to autocomplete screenshot. Then type a filename for your screenshot, if you want to define it, either with or without the file extension; otherwise you’ll get whatever naming convention your computer uses for screen captures. For example, mine does something like Screen Shot 2015-10-22 at 10.05.51.png by default. If you hit [return] (or equivalent) at this point, it’ll save the screenshot to your Downloads folder (or equivalent). Done!
Except, don’t do that yet, because what really makes screenshot great is its options; in my case, they’re what elevate screenshot from useful to essential, and what set it apart from any screen-capture addon I’ve ever seen.
The option I use a lot, particularly when grabbing images of web sites for my talks, is --fullpage. That option captures absolutely everything on the page, even the parts you can’t see in the browser window. See, by default, screenshot only captures the portion of the page visible in the browser window. In many cases, that’s all you want or need, but for the times you want it all, --fullpage is there for you. Any time you see me do a long scroll of a web page in a talk, like I did right at the ten-minute mark of my talk at Fluent 2015, it was thanks to --fullpage.
If you want the browser --chrome to show around your screenshot, though, you can’t capture the --fullpage. Firefox will just ignore the --fullpage option if you invoke --chrome, and give you the visible portion of the page surrounded by your browser chrome, including all your addon icons and unread tabs. Which makes some sense, I admit, but part of me wishes someone had gone to the effort of adding code to redraw the chrome all the way around a --fullpage capture if you asked for it.
Now, for the purposes of CSS:TDG4e’s figures, there are two screenshot options that I cannot live without.
The first is --selector, which lets you supply a CSS selector to an element—at which point, Firefox will capture just that element and its descendants. The only, and quite understandable, limitation is that the selector you supply must match a single element. For me, that’s usually just --selector 'body', since every figure I create is a single page, and there’s nothing in the body except what I want to include in the figure. So instead of trying to drag-select a region of the screen with ⇧⌘4, or (worse) trying to precisely size the browser window to show just the body element and not one pixel more, I can enter something like screenshot fig047 --selector 'body' and get precisely what I need.
That might seem like a lot to type every time, but the thing is, I don’t have to: not only does the Developer Toolbar have full tab-autocomplete, the Toolbar also offers up-arrow history. So once I’ve tab-completed the command to capture my first figure, I just use the up arrow to bring the command back and change the file name. Quick, simple, efficient.
The second essential option for me is --dpr, which defines a device pixel ratio. Let’s say I want to capture something at four times the usual resolution. --dpr 4 makes it happen. Since all my figures are meant to go to print as well as ebooks, I can capture at print-worthy resolutions without having to use ⌘+ to blow up the content, or fiddle with using CSS to make everything bigger. Also, if I want to go the other way and capture a deliberately pixellated version of a page, I can use something like --dpr 0.33.
I have used this occasionally to size down an image online: I “View Image” to get it in its own window, then use screenshot with a fractional DPR value to shrink it. Yes, this is a rare use case, even for me, but hey—the option exists! I haven’t used the DPR option for my talks, but given the growing use of HD 16:9 projectors—something we’ve been using at An Event Apart for a while now, actually—I’m starting to lean toward using --dpr 2 to get sharper images.
And that’s not all! You can set a --delay in seconds, to make sure a popup menu or other bit of interaction is visible before the capture happens. If you want to take your captured image straight into another program before saving it, there’s --clipboard. And there’s an option to upload straight to --imgur, though I confess I haven’t figured out how that one works. I suspect you have to be logged into imgur first. If anyone knows, please leave a comment so the rest of us know how to use it!
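To pull the options together, here’s a quick cheat sheet of the kinds of invocations described above. Remember, these are typed into the Developer Toolbar (⇧F2), not a shell, and the file names and delay value here are just made-up examples:

```
screenshot                                   visible viewport, default file name
screenshot fig047 --selector 'body'          just the body element and its descendants
screenshot longscroll --fullpage             the entire page, including off-screen parts
screenshot fig047 --selector 'body' --dpr 4  4x resolution, good for print
screenshot menu-open --delay 3               wait 3 seconds before capturing
screenshot --clipboard                       send the capture to the clipboard
```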
The one thing that irks me a little bit about screenshot is that the file name must come before the options. When I’m producing a bunch of figures in a row, having to drag-select just the file name for replacement is a touch tedious; I wish I could put the file name at the end of the command, so I could quickly drag-select it with a rightward wrist-flick. But all things considered, this is a pretty minor gripe.
The other thing I wish screenshot could do is let me define a precise width or height in pixels—or, since I’m dreaming, a value using any valid CSS length unit—and scale the result to that measure. This isn’t really useful for the CSS:TDG4e figures, but it could come in pretty handy for creating talk slides. No, I have no idea how that would interact with the DPR option, but I’d certainly be willing to find out.
So that’s one of my “unusual but essential” tools. What’s yours?
It’s been a busy couple of weeks for Facebook, in terms of compassionate design decisions.
First they announced that they aren’t adding a Dislike button, but they are adding a set of six emoji reactions to the “Like” button, so you can indicate a wider range of emotion. Some people immediately linked this to Slack, as if emoji reactions hadn’t been a thing on social media for the last couple of years. I happened to see Sally Herships asking “what are your thoughts?” about it on Twitter (heh), and oh, I had thoughts. I ended up sharing some of those thoughts by phone, and one of them was part of a segment on American Public Media’s Marketplace.
It’s funny, in a way, that my thought on marketing and advertisers was what made it into the piece, because I think that was literally my whole thought about that side of things. Most of the rest of my conversation with Sally was about how Facebook could use these reactions as a way to avoid insensitive design choices. As an example, a status update that gets lots of interaction in the frowny-face or sad-face realm could be avoided when it comes to things like Year in Review. I said something to the effect of:
People are sharing everything about their lives, positive and negative, billions of us every day. That isn’t going to stop, so it’s great to see Facebook making changes to meet us where we are, or at least meet us partway.
These reaction emoji almost certainly aren’t the last word on this, but they’re a credible initial attempt. In more than one sense, they’re a first step into a larger world.
Next, Facebook introduced filtering for its On This Day (OTD) feature. This is another step in the evolution of On This Day, one that’s very welcome. Facebook had already been revising its language to be more humane, shifting from simple “Relive this memory” to nuanced language expressing care and openness.
With its new OTD preferences, Facebook now lets you define ranges of dates you’d like to be blacklisted, in effect, as well as people you don’t want to see memories about. I’d commented on the lack of this, back when OTD launched:
…what I notice here is what’s missing: I don’t see any reference to an ability to opt out of On This Day, either for certain days or altogether.
So far as I can tell, you still can’t opt out entirely; even if you turn off all notifications, you can still get memories inserted into your timeline. For me, I see about one a month, more or less. But here’s the interesting thing: they’re almost never my memories. In what I still regard as a major gamble by Facebook, On This Day will show you posts, pictures, and videos posted by someone else, but on which you were tagged. I presume (though I have no simple way to test) that adding a person in the OTD filtering preferences will prevent you from seeing memories in which they’re tagged as well as memories they posted.
If so, that’s a really smart step, as I can only imagine how a spiteful ex might abuse OTD. It still leaves open the possibility of old posts that you don’t remember being tagged on suddenly appearing. In many cases, that will be a delightful moment, but in many others, the exact opposite of that. This is why I regard Facebook’s decision to show you posts from other people as a gamble. Even if they show unwanted memories to just 1% of their user base—a ridiculously low percentage—that’s literally 10 million people a day.
Still: wrinkles or no, flaws or no, the presence of filtering preferences is a major enhancement to On This Day. I could block out all of June 2014, if I so chose. There might be years where I blocked it, and others where I removed the block. The important thing is that I’m being given that capability, in an environment that’s already designed to show me memories and acknowledge that it’s easy to get that wrong. The user experience for adding filters is still clunky, but much like the reaction emoji, I view this as a credible first try, not the final word.
All this has made for some interesting Slack discussions between me and Sara, as we literally just finished the manuscript for our forthcoming, still-not-quite-titled-but-we’re-really-close-honest book on compassion in design. Which has references to things like On This Day, so we’re already revising a book that hasn’t even been published yet. And when will it be published? We’re pulling for early next year, which sounds like a long way away until you remember that 2015 is getting close to done.
Kudos to Facebook, both for its efforts to be kinder in what it does and for its willingness to try. Not many businesses, let alone social-media titans, have had the courage to think about what can go wrong in this realm, let alone actually acknowledge missteps and work to do better. Well done.
It’s time for a semi-periodic update on CSS: The Definitive Guide, 4th Edition! The basic news is that things are proceeding, albeit slowly. Eight chapters are even now available as ebooks or, in most cases, print-on-demand titles. Behold:
CSS and Documents, which covers the raw basics of how CSS is associated with HTML, including some of the more obscure ways of strapping external styles to the document as well as media query syntax. It’s free to download in any of the various formats O’Reilly offers.
Selectors, Specificity, and the Cascade, which combines two chapters to cover all of the various Level 3 selector patterns as well as the inner details of how specificity, inheritance, and the cascade work.
Values, Units, and Colors, which covers all the various ways you can label numbers as well as use strings. It also takes advantage of the new cheapness of color printing to use a bunch of nice color-value figures that aren’t forced to be all in grayscale.
CSS Fonts, which dives into the gory details of @font-face and how it can deeply affect the use of font-related properties, both those we use widely as well as many that are quickly gaining browser support.
CSS Text, which covers all the text styles that aren’t concerned with setting the font face—stuff like indenting, decoration, drop shadows, white-space handling, and so on.
Basic Visual Formatting in CSS, which covers how block, inline, inline-block, and other boxes are constructed, including the surprisingly complicated topic of how lines of text are constructed. Very fundamental stuff, but of course fundamentals are called that for a reason.
Transforms in CSS, which is currently FREE in ebook format, covers the transform property and its closely related properties. 2D, 3D, it’s all here.
Colors, Backgrounds, and Gradients, which covers those three topics in FULL GLORIOUS COLOR, fittingly enough. Curious about the new background sizing options? Ever wondered exactly how linear and radial gradients are constructed? This book will tell you all that, and more.
Here’s what I have planned to write next:
Padding, Borders, Outlines, and Margins — including the surprisingly tricky border-image
Positioning – basically an update, with new and unexpected twists that have been revealed over the years (case in point)
Grid Layout – though this is coming faster than many of us realize, I may put this one off for a little bit while we see how browser implementations go, and find out what changes happen as a result
My co-author, Estelle, has another three chapters/short books currently in process.
Beyond those 14 chapters, we have eight more on the roster, covering topics like floating, multicolumn layout, shapes, and more. CSS is big now, y’all.
So that’s where we are right now. Our hope is to have the whole thing written by the middle of 2016, at which point some interesting questions will have to be answered. While most of the book is fine in grayscale, there are some chapters (like Colors, Backgrounds, and Gradients) that really benefit from being in color. Printing a 22-chapter book in color would make it punishingly expensive, even with today’s drastically lower cost of color printing. So what to do?
Not to mention, printing a 22-chapter book is its own level of difficulty. Even if we assume an average of 40 pages a chapter—an unreasonably low figure, but let’s go with it—that’s still a nine hundred page book, once you add front and back matter. The binding requirements alone get us into the realm of punishingly expensive, even without color.
Of course, ebook readers don’t have to care about any of that, but some people (like me) really do prefer paper. So there will be some interesting discussions. Print in two volumes? Sell the individual chapter books in a giant boxed set, Chronicles of Narnia style? We’ll see!
How many talks have I given over the years? How many times have I stood at the front of a room, on a stage or in front of a chalkboard or otherwise before an audience, and talked at them for an hour or so?
Lanyrd says 72 as I write this, with two more coming this year. But Lanyrd only goes back to 2003, so I already know it’s missing some of my past appearances. Everything from 1995 (or was it 1996?) through 2003, for example. The talks I’ve done for college classes and user groups in Cleveland. Probably others as well. So let’s round it off to an even one hundred, and pretend like that’s a meaningful milestone or something.
I used to talk about code, style, standards, all that stuff. It was all, as the cliché goes, subjects for which I had prepared not my talk, but myself. I knew the subject so thoroughly, I pretty much never wrote out a script. I wrote an outline, assembled slides or demos or whatever to support that outline, and then mostly improvised my way through the talk. The closest I got to rehearsal was back in 2007, I think, when my talk was two slides in Keynote and then a bunch of pre-created style snippets that I dropped into a live web page, saving and reloading, talking about the changes as I went. Live-coding, except without relying on my sloppy typing skills.
(That one was called “Secrets of the CSS Jedi”, where I took a table of data, marked up as such, and turned it into a bar graph live on stage, the summary line of which I still remember: “CSS does not care what you think an element should look or act like. You have far more power than you realize.” That was a revolutionary thing to say back then. We were coders once, and young.)
These days, my talks are nearly or entirely code-free, as I explore topics like compassion in design, and the ways that our coding has a profound influence on society now and into the future. The talks generally start life as 9,000-word essays that I edit, rearrange, patch up, re-edit, polish, and then rehearse. After the first two rehearsals, I re-re-edit and re-polish. Then I rehearse several more times.
The point of all this being:
I stumble through my rehearsals, getting more and more incoherent, getting more frustrated every time I have to start over, certain I’ll never get the words to work, increasingly convinced it means the ideas behind them have no merit at all, until I want to curl up in a cushion fort and never come out. I grapple with the fear that even if by some miracle I do have one or two worthwhile things to say, they’ll be buried in a flood of stuttered half-sentences and self-protective rhetorical tricks.
So I get nervous before my talks. Adrenaline surges through me, elevating my pulse and making my palms sweat as they get prickly, the cold fire washing up my arms and into my cheeks. I pace and fidget, concentrating on my breathing so I don’t hyperventilate. Or hypoventilate, for that matter.
I do this before every talk I give at An Event Apart, even when I’ve given the talk half a dozen times previously. I did it before I hit the stage at XOXO 2015. I did it before I started my talks at Rustbelt Refresh.
A hundred public talks or more, and it’s still not easy. I’m not sure it ever will be easy. I’m not sure it ever should be easy.
The further point being:
Every speaker I know feels pretty much exactly the same. We don’t all get the same nervous tics, but we all get nervous. We struggle with our fears and doubts. We all feel like we have no idea what we’re doing.
So if you’re afraid to get up in front of people and share what you know: you’re in very, very good company. I know this, because I am too.
If you have something to share—and you do—try not to let the fear stop you.
Content blockers have arrived, as I’m sure you’re aware by now. They’re more commonly referred to as ad blockers, but they’re much more than that, really. In fact, they’re a time machine.
Sound familiar? It should. We’ve been here before. It’s 1995 all over again.
And, just as in 1995, publishers are faced with a landscape where they’re not sure how to make money, or even if they can make money.
Content blockers are a two-decade reset button. We’re right back where we were, twenty years ago. Except this: we already know a bunch of stuff that doesn’t work.
I don’t mean that ads don’t work. Ads can work. We’ve seen small, independent ad networks like The Deck do pretty okay. They didn’t make anyone a billionaire, but they provided a good audience to advertisers via a low-impact mechanism, and some earnings for those who ran the ads and the network.
The ads that are at risk now are the ones delivered via bloated, badly managed, security-risk mechanisms. In other words: what’s at risk here is terrible web development.
Granted, the development of these ads was so terrible that it made the entire mobile web ecosystem appear far more broken than it actually is, and prompted multiple attempts to rein it in. Now we have content blockers, which are basically the nuclear option: if you aren’t going to even attempt to respect your customers, they’re happy to torch your entire infrastructure.
Ethical? Moral? Rational? Hell if I know or care. Content blockers became the top paid apps within hours of iOS9’s release, and remain so. The market is speaking incredibly loudly. It’s almost impossible not to hear it. The roar is so loud, in fact, it’s difficult to make out what people are actually saying.
I have my interpretation of their shouting, but I’m going to keep it to myself. The observation I really want to make is this: the entire industry is being given a do-over here. Not the ad industry; the web industry.
Content blockers strip the web back to what it was 20 years ago. All the same challenges and questions are back, full force. How do we make sites better, smarter, and cooler? How do we make money by publishing online?
There are reputations and probably fortunes to be made by learning from our many mistakes and finding new, smarter ways to move forward. I would advocate that people start with the core principles of the web standards movement, particularly progressive enhancement, but those are starting points, a foundation—just as they always were.
It’s not often that an entire industry gets an almost literal do-over. We have two decades of hindsight to work with now, as we try to figure out how to (re)build a web where users don’t feel like they need content blockers just to be online. This is an incredibly rare and exciting juncture. Let’s not waste it.