I’ve always meant to get back to it and make it more interactive. So over the past several evenings, I’ve rebuilt it as an SVG-based visualization. The main point of doing this was so that when you hover the mouse pointer over one of the little color boxes, it will fill the center of the color wheel with the hovered color and tell you its name and HSL values. Which it does, now. It even tries to guess whether the text should be white or black, in order to contrast with the underlying color. Current success rate on that is about 90%, I think. Calculating perceived visual brightness turns out to be pretty hard!
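For the curious, here’s a minimal sketch of the kind of calculation involved — this is the common WCAG relative-luminance approach, not necessarily the exact code I ended up with:

    // Convert sRGB channels (0-255) to linear-light values, then weight
    // them by the eye's differing sensitivity to red, green, and blue.
    function textColorFor(r, g, b) {
      const [lr, lg, lb] = [r, g, b].map(c => {
        c /= 255;
        return c <= 0.04045 ? c / 12.92 : ((c + 0.055) / 1.055) ** 2.4;
      });
      const luminance = 0.2126 * lr + 0.7152 * lg + 0.0722 * lb;
      // The cutoff is where that ~90% success rate comes in; no single
      // threshold value matches human perception for every hue.
      return luminance > 0.45 ? "black" : "white";
    }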
Other things I either discovered, or want to do better in the future:
Very nearly half the CSS4 (and also CSS3/SVG) color keywords are in the first 90 degrees of hue. More than half are in the first 120 degrees.
There are a lot of light/medium/dark variant names in the green and blue areas of the color space.
I wish I could make the color swatches bigger, but when I do that the adjacent swatches overlap each other and one of them gets obscured.
Therefore, being able to zoom in on parts of the visualization is high on my priority list. All I need is a bit of event monitoring and some viewBox manipulation. Well, that and a bit more time. Done, at least for mouse scroll wheels.
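In case it’s useful to anyone attempting the same, the core of it is something like this — a sketch, assuming an SVG root element with a viewBox already set:

    const svg = document.querySelector("svg"); // the visualization's root
    svg.addEventListener("wheel", event => {
      event.preventDefault(); // keep the page itself from scrolling
      const [x, y, w, h] = svg.getAttribute("viewBox").split(/\s+/).map(Number);
      const scale = event.deltaY < 0 ? 0.9 : 1.1; // in on scroll up, out on down
      const newW = w * scale, newH = h * scale;
      // Shift the origin so the zoom stays centered on the current view
      svg.setAttribute("viewBox",
        `${x + (w - newW) / 2} ${y + (h - newH) / 2} ${newW} ${newH}`);
    }, { passive: false });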
I’d like to add a feature at some point where you type text, and a list is dynamically filtered to show keywords containing what you typed. And each such keyword has a line connecting it to the actual color swatch in the visualization. I have some ideas for how to make that work.
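One way the filtering half might work — assuming, hypothetically, that each swatch carries a data-name attribute holding its keyword — is a simple input listener:

    const input = document.getElementById("keyword-filter"); // hypothetical field
    const swatches = document.querySelectorAll("[data-name]");
    input.addEventListener("input", () => {
      const query = input.value.trim().toLowerCase();
      for (const swatch of swatches) {
        const match = query !== "" && swatch.dataset.name.includes(query);
        swatch.classList.toggle("highlighted", match);
      }
    });
    // Drawing the connector lines would mean reading each match's
    // position and appending <line> elements to the SVG.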
I’d love to create a visualization that placed the color swatches in a 3D cylindrical space summarizing hue, lightness, and saturation. Not this week, though.
I’m almost certain it needs accessibility work, which is also high on my priority list.
SVG needs conic gradients. Or the ability to wrap a linear gradient along/inside/around a shape like a circle, that would work too. Having to build a conic gradient out of 360 individual <path>s is faintly ridiculous, even if you can automate it with JS.
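If you want a sense of how faintly ridiculous, here’s roughly what the JS automation looks like — a sketch with an assumed center, radius, and svg reference:

    const NS = "http://www.w3.org/2000/svg";
    const svg = document.querySelector("svg");
    const cx = 200, cy = 200, r = 180; // assumed center and radius

    function rimPoint(angle) {
      const rad = (angle - 90) * Math.PI / 180; // put 0° at the top
      return `${cx + r * Math.cos(rad)},${cy + r * Math.sin(rad)}`;
    }

    for (let hue = 0; hue < 360; hue++) {
      const wedge = document.createElementNS(NS, "path");
      // Each wedge runs from the center to a one-degree arc of the rim,
      // overdrawn slightly to hide antialiasing seams between wedges.
      wedge.setAttribute("d",
        `M ${cx},${cy} L ${rimPoint(hue)} A ${r} ${r} 0 0 1 ${rimPoint(hue + 1.5)} Z`);
      wedge.setAttribute("fill", `hsl(${hue}, 100%, 50%)`);
      svg.appendChild(wedge);
    }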
And also z-index awareness. C’mon, SVG, get it together.
I toyed with the idea of nesting elements with borders and some negative margins to pull one border on top of another, or nesting a border inside an outline and then using negative margins to keep from throwing off the layout. But none of that felt satisfying.
It turns out there are a number of tricks to create the effect of stacking one border atop another by combining a border with some other CSS effects, or even without actually requiring the use of any borders at all. Let’s explore, shall we?
That’s from the introduction to my article “Stacked ‘Borders’”, which marks the first time I’ve ever been published at the venerable upstart CSS-Tricks. (I’m old, so I can call things both venerable and an upstart. You kids today!) In it, I explore ways to simulate the effect of stacking multiple element borders atop one another, including combining box shadows and outlines, borders and backgrounds, and even using border images, which have a much wider support base than you might have realized.
Many thanks to Chris Coyier for accepting the piece, and Geoff Graham for his editorial assistance. I hope you’ll find at least some part of it useful, or better still, interesting. Share and enjoy!
I’ve relied on a mouse for about a decade and a half. I don’t mean “relied on a mouse” in the generic sense, but rather in the sense that I’ve relied on one very specific and venerable mouse: a Logitech MX500.
I’ve had it for so long, I’d forgotten how long I’ve had it. I searched for information about its production dates and wouldn’t you know it, Wikipedia has an article devoted solely to Logitech products throughout history, because of course it does, and it lists (among other things) their dates of release. The MX500 was released in 2002, and superseded by the MX510 in 2004. I then remembered a photo I took of my eldest child when she was an infant, trying to chew on a computer mouse. I dug it out of my iPhoto library and yep, it’s my MX500. The picture is dated June 2004.
So I have photographic evidence that I’ve used this specific mouse for 15 years or more. The logo plate on top of the mouse has been worn half-smooth and half-paintless by the palm of my hand, much like the shiny-smooth areas worn into the subtle matte surface texture where the thumb and pinky finger grip the sides. The model and technical information printed on the underside has similarly worn away. It started out with four little oval glide nubs on the underside that held the bottom away from the desk surface; only one remains. Even though, as an optical mouse, it can be used on any surface, I eventually went back to soft mousepads, so as to limit further undercarriage damage.
Why have I been so devoted to this mouse? Well, it’s incredibly well engineered, for one — it’s put up with 15 years of daily use. It’s exactly the right shape for my hand, and it has multiple configurable inputs right where I expect them. There are arrow buttons just above my thumb which I use as forward/backward in browsers, buttons above and below the scroll wheel that I map to Page Up/Page Down, an extra button at almost the apex of the mouse’s back mapped to ⌥⇥ (Option-Tab), and the usual right/left mouse click buttons. Plus the scroll wheel is itself a push-down-to-click button.
Most of these features can be found on one mouse or another, but it’s rare to find them all in one mouse — and next to impossible to find them in a shape and size that feels comfortable to me. I’d occasionally looked at the secondary market, but even used, the MX500 can command three figures. I checked Amazon as I wrote this, and an unused MX500 was listing for two hundred fifty dollars. Unused copies of its successor, the MX510, were selling for even more.
Now, if you were into gaming in the first decade of the 2000s, you may have heard of or used the MX510’s successor, the MX518. Released in 2005, it was basically an MX500/MX510, but branded for gaming, with some optical-sensor upgrades for more tracking precision. The MX518 lasted until 2011, when it was superseded by a different model, which itself was superseded, which et cetera, et cetera, et cetera.
Which brings me to the point of all this. A few weeks ago, after several weeks of sporadic glitches, the scroll wheel on my MX500 almost completely stopped responding to being scrolled. Which maybe doesn’t sound like a big deal, but try going without your scroll wheel for a while. I was surprised to discover how much I relied on it. So, glumly, knowing the model was long out of production and incredibly expensive to buy, I went searching for equivalents.
And that’s when I discovered that Logitech had literally announced less than a week earlier that they were releasing an updated MX518, available for pre-order.
Friends, I have never pre-ordered anything so fast.
This past Thursday afternoon, it arrived. I got it set up and have been working with it since. And I have some impressions.
Physically, the MX518 Legendary (as Logitech has branded it) is 95% a match for my old MX500. It’s ever so slightly smaller, just enough that I can tell but not quite enough to be annoying, odd as that may seem. Otherwise, everything feels like it should. The buttons are crisp and clicky, and right where I expect them. And the scroll wheel… well, it works.
The coloration is different — the surface and buttons are all black, as opposed to the MX500’s black-and-silver two-tone styling. While I miss the two-tone a bit, there’s an upgrade: the smooth black top surface has subtle little sparkles embedded in the paint. Shiny!
On the other hand, configuring the mouse was a bit of an odyssey. First off, let me make clear that I have a weird setup, even for a grumpy old Mac user. I plug a circa-2000 Macally original iKey 104-key keyboard into my 2013 MacBook Pro. (Yes, you have sensed a trend here: when I find hardware I really like, I hang onto it like a rabid weasel. Ditto software.) The “extra” keys on the Macally like Page Up, Home, and so on don’t get recognized by a lot of current software. Even the Finder can’t read the keyboard’s function keys properly. I’ve restored their functionality with the entirely excellent BetterTouchTool, but it remains that the keyboard is just odd in its ancientness.
Anyway, I first opened System Preferences and then the Logitech Control Center pane. It couldn’t find the MX518 Legendary at all. So next I opened the (separate) Logitech Options pane, which drives the wireless mouse I use when I travel. It too was unable to find the MX518.
Some Bing-ing led me to a download for Logitech Gaming Software (hereafter LGS), which I installed. That could see the MX518 just fine. Once I stumbled my way into an understanding of LGS’s UI, I set about trying to configure the MX518’s buttons to do what I wanted.
And could not. In the list of predefined mouse actions that could be assigned to the buttons, precisely none of my desires were listed. No ⌘-arrow combos, no page up or down, not even ⌥⇥ to switch apps. I mean, I guess that’s to be expected: it’s sold as a gaming mouse. LGS has plenty of support for on-the-fly-dee-pee-eye switching and copy-paste and all that. Not so much for document editing and code browsing.
There is a way to assign keyboard combos to buttons, but again, the software could understand precisely none of the combos I wanted to record when I typed them on my Macally. So I went to the MacBook Pro’s built-in keyboard, where I was able to register ⌥⇥, ⌘→, and ⌘←. I could not, however much I tried, register Page Up or Page Down. I pressed Fn, which showed “Fn” in the LGS software, and then pressed the down arrow for Page Down, and as long as I held down both keys, it showed “Page Down”. But as soon as I let go of the down arrow, “Fn” was registered again. No Page Down for me.
Now, recall, this was happening on the laptop’s built-in keyboard. I can’t really blame this one on the age of the external Macally. I really think this one might fall on LGS itself; while a 2013 MacBook is old, it’s not that old.
I thought I might be stuck, but I intuited a workaround: I opened the Keyboard Viewer app built into macOS. With that, I could just click the virtual Page Up and Page Down keys, and LGS registered them without a hiccup. While I was in there, I used it to set the scroll wheel’s middle-button click to trigger Mission Control (F3).
The following key-repeat problem has been fixed and was not the fault of the MX518; see my comment for details on how I resolved it. The one letdown I have is that the buttons don’t appear to repeat keystrokes. So if I hold the button I’ve assigned to Page Down for example, I get exactly one page-down, and that’s it until I release and click the button again. On the MX500, holding down the button assigned to Page Down would just constantly page down until I let go. This was sometimes preferable to scrolling with the scroll wheel, especially for long documents I wanted to very quickly scan for a certain figure or other piece of the page. The same was true for all the buttons: hold it down, and the thing it was configured to do happened repeatedly until you let go.
The MX518 Legendary isn’t doing that. I don’t know if this is an inherent limitation of the mouse, its software, my configuration of it, the interaction of software and operating system, or something else entirely. It’s not an issue forty-nine times out of fifty, but that fiftieth time is annoying.
The other annoyance is one of possibly missed potential. The mouse software has, in keeping with its gaming focus, the ability to set up multiple profiles; that way, you can assign unique actions to the buttons on a per-application basis. I set up a couple of profiles to test it out, but LGS is completely opaque about how to make profiles switch automatically when you switch to an app. I’ll look for an answer online, but it’s annoying that the software promises per-app profiles, and then apparently fails to deliver on that promise.
So after all that, am I happy? Yes. It’s essentially my old mouse, except brand new. My heartfelt thanks to Logitech for bringing this workhorse out of retirement. I look forward to a decade or more with it.
So it’s been (checks watch) half a year since I last blogged, yeah, okay, been a while. I took a break, not that you would’ve been able to tell from the sporadic nature of updates before I did so, but a break I took nonetheless. Well, break’s over.
One of the things I plan to do is fill in a post I missed writing at the beginning of December: the 25th anniversary of my working with the web. I’ll tell the story in that post, but suffice to say it involves a laptop, a printout of the HTML specification, Microsoft Word 5.1a, a snagged Usenet post, and Mystery Science Theater 3000. Keep circulating the tags!
Before that happens, I’ll be posting a review of the return of a very old, very faithful assistant. I also have an article coming on a site where I’ve never been published before, so that’s exciting — look for an announcement here as soon as it’s public. Stay tuned!
Firefox 62 ships today, bringing with it some real CSS goodness. For one: float shapes! Which means now, mainline Firefox users will see the text flow past the blender in “Handiwork” the same way Chrome users have for a long time now.
But an even bigger addition is support for variable fonts. The ability to have one font file that mathematically describes variants on the base face means that all kinds of fine-grained typography is possible with far less bandwidth overhead and a major reduction in page weight.
However: bear in mind that, like Safari but unlike Chrome, Firefox’s variable-font support is dependent on the operating system on which it runs. If you have Windows 10 or macOS 10.13, then you have variable-font support in Firefox and Safari. Earlier versions of those operating systems don’t support variable fonts, and so Safari and Firefox don’t either. Chrome rolls its own variable-font support, so it can extend support backwards in the OS timeline.
(I don’t know how things stand in the Linux world. Hopefully someone can clear things up in the comments!)
I say this not to chastise Firefox (nor Safari), because I tend to think leaning on the OS for this sort of thing is reasonable. I feel the same way about form elements like <select> dropdowns, to be clear, which I know likely places me in the minority. The point here is to give you a heads-up: if you get reports that a font isn’t doing the variable thing you styled, but it’s working fine for you, keep “check their operating system version” on your list of diagnostic tests.
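A quick console check can serve as one of those diagnostic tests. Note that this confirms the browser parses the property, which tracks OS-level support in Firefox and Safari but isn’t an absolute guarantee of rendering:

    // Run in the browser console (or bake into a support report)
    const ok = CSS.supports("font-variation-settings", '"wght" 650');
    console.log(ok
      ? "Variable fonts supported here"
      : "No variable-font support — check the OS version");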
Back in 2015, I wrote about Firefox’s screenshot utility, which used to be a command in the GCLI. Well, the GCLI is gone now, but the coders at Mozilla have brought command-line screenshotting back with :screenshot, currently available in Firefox Nightly and Firefox Dev Edition. It’s available in the Web Console (⌥⌘K or Tools → Web Developer → Console).
Once you’re in the Web Console, you can type :sc and then hit Tab to autocomplete :screenshot. From there, everything is the same as I wrote in 2015, with the exception that the --imgur and --chrome options no longer exist. There are plans to add uploading to Firefox Screenshots as a replacement for the old Imgur option, but as of this writing, that’s still in the future.
So the list of :screenshot options as of late August 2018 is:
--clipboard
Copies the image to your OS clipboard for pasting into other programs. Prevents saving to a file unless you use the --file option to force file-writing.
--delay
The time in seconds to wait before taking the screenshot; handy if you want to pop open a menu or invoke a hover state for the screenshot. You can use any number, not just integers.
--dpr
The Device Pixel Ratio (DPR) of the captured image. Values above 1 yield “zoomed-in” images; values below 1 create “zoomed-out” results. See the original article for more details.
--fullpage
Captures the entire page, not just the portion of the page visible in the browser’s viewport. For unusually long (or wide) pages, this can cause problems like crashing, not capturing all of the page, or just failing to capture anything at all.
--selector
Accepts a CSS selector and captures only that element and its descendants.
--file
When true, forces writing of the captured image to a file, even if --clipboard is also being used. Setting this to false doesn’t seem to have any effect.
--filename
Allows you to set a filename rather than accept the default. Explicitly saying --filename seems to be optional; I find that writing simply :screenshot test yields a file called test.png, without the need to write :screenshot --filename test. YFFMV.
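For example, here are a few invocations of the sort you might type into the Web Console, combining the options above:

    :screenshot --fullpage --dpr 2 longpage
    :screenshot --delay 3 --clipboard
    :screenshot --selector "#main-nav" navigation

The first captures the whole page at double resolution to longpage.png; the second waits three seconds and then copies the visible viewport to the clipboard; the third captures just the element matching the selector.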
I do have one warning: if you capture an image to a filename like test.png, and then you capture to that same filename, the new image will overwrite the old image. This can bite you if you’re using the up-arrow history scroll to capture images in quick succession, and forget to change the filename for each new capture. If you don’t supply a filename, then the file’s name uses the pattern of your OS screen capture naming; e.g., Screen Shot 2018-08-23 at 16.44.41.png on my machine.
I still use :screenshot to this day, and I’m very happy to see it restored to the browser — thank you, Mozillians! You’re the best.
In the middle of last month (July 2018), I found myself staring at a projector screen, waiting once again to see if Wikipedia would load. If I was lucky, the page started rendering 15-20 seconds after I sent the request. If not, it could be closer to 60 seconds, assuming the browser didn’t just time out on the connection. I saw a lot of “the server stopped responding” over the course of a few days.
It wasn’t just Wikipedia, either. CNN International had similar load times. So did Google’s main search page. Even this here site, with minimal assets to load, took a minimum of 10 seconds to start rendering. Usually longer.
In 2018? Yes. In rural Uganda, where I was improvising an introduction to web development for a class of vocational students, that’s the reality. They can have a computer lab full of Dell desktops running Windows or rows of Raspberry Pis running Ubuntu or whatever setup there is, but when satellites in geosynchronous earth orbit are your only source of internet, you wait. And wait. And wait.
I want to explain why — and far more importantly, how we’ve made that experience interminably worse and more expensive in the name of our comfort and security.
First, please consider the enormously constrained nature of satellite internet access. If you’re already familiar with this world, skip ahead a few paragraphs; but if not, permit me a brief description of the challenges.
For geosynchronous-satellite internet access, the speed of light becomes a factor in ping times: just having the signals propagate through a mixture of vacuum and atmosphere chews up approximately half a second of travel time over roughly 94,500 miles (~152,000 km). If all that distance were vacuum, your absolute floor for ping latency would be about 506 milliseconds.
That’s just the time for the signals to make two round trips to geosynchronous orbit and back. In reality, there are the times to route the packets on either end, and the re-transmission time at the satellite itself.
But that’s not the real connection killer in most cases: packet loss is. After all, these packets are going to orbit and back. Lots of things along those long and lonely signal paths can cause the packets to get dropped. 50% packet loss is not uncommon; 80% is not unexpected.
So, you’re losing half your packets (or more), and the packets that aren’t lost have latency times around two-thirds of a second (or more). Each.
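To put rough numbers on that — a back-of-the-envelope sketch using the figures above; real links add routing and retransmission time on top:

    const C = 299792;               // speed of light, km/s
    const SIGNAL_PATH_KM = 152000;  // request up and down, response up and down
    console.log((SIGNAL_PATH_KM / C).toFixed(3)); // "0.507" — the latency floor

    // With a packet loss rate p, each packet needs 1/(1-p) sends on average
    const expectedSends = p => 1 / (1 - p);
    console.log(expectedSends(0.5)); // 2 sends per packet at 50% loss
    console.log(expectedSends(0.8)); // 5 sends per packet at 80% loss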
That’s reason enough to set up a local caching server. Another, even more pressing reason is that pretty much all commercial satellite connections come with data caps. Where I was, their cap was 50GB/month. Beyond that, they could either pay overages, or just not have data until the next month. So if you can locally cache URLs so that they only count against your data usage the first time they’re loaded, you do that. And someone had, for the school where I was teaching.
But there I stood anyway, hoping my requests to load simple web pages would bear fruit, and I could continue teaching basic web principles to a group of vocational students. Because Wikipedia wouldn’t cache. Google wouldn’t cache. Meyerweb wouldn’t cache. Almost nothing would cache.
Why?
HTTPS.
A local caching server, meant to speed up commonly-requested sites and reduce bandwidth usage, is a “man in the middle”. HTTPS, which by design prevents man-in-the-middle attacks, utterly breaks local caching servers. So I kept waiting and waiting for remote resources, eating into that month’s data cap with every request.
The drive to force every site on the web to HTTPS has pushed the web further away from the next billion users — not to mention a whole lot of the previous half-billion. I saw a piece that claimed, “Investing in HTTPS makes it faster, cheaper, and easier for everyone.” If you define “everyone” as people with gigabit fiber access, sure. Maybe it’s even true for most of those whose last mile is copper. But for people beyond the reach of glass and wire, every word of that claim was wrong.
If this is a surprise to you, you’re by no means alone. I hadn’t heard anything about it, so I asked a number of colleagues if they knew about the problem. Not only had they not, they all reacted the same way I did: this must not be an actual problem, or we’d have heard about it! But no.
Can we do anything? For users of up-to-date browsers, yes: service workers create a “good” man in the middle that sidesteps the HTTPS problem, so far as I understand. So if you’re serving content over HTTPS, creating a service worker should be one of your top priorities right now, even if it’s just to do straightforward local caching and nothing fancier. I haven’t gotten one up for meyerweb yet, but I will do so very soon.
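The straightforward case is small enough to sketch here — a minimal cache-first worker, assuming a file named sw.js at the site root, and not production-hardened:

    // sw.js — serve from the local cache when possible, else fetch and cache
    const CACHE = "site-cache-v1";

    self.addEventListener("fetch", event => {
      event.respondWith(
        caches.match(event.request).then(cached =>
          cached || fetch(event.request).then(response => {
            const copy = response.clone(); // a response body can only be read once
            if (event.request.method === "GET" && response.ok) {
              caches.open(CACHE).then(cache => cache.put(event.request, copy));
            }
            return response;
          })
        )
      );
    });

    // Registered from the page with:
    // if ("serviceWorker" in navigator) navigator.serviceWorker.register("/sw.js");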
That’s great for modern browsers, but not everyone has the option to be modern. Sometimes they’re constrained by old operating systems to run older browsers, ones with no service-worker support: a lab full of Windows XP machines limited to IE8, for example. Or on even older machines, running Windows 95 or other operating systems of that era. Those are most likely to be the very people who are in situations where they’re limited to satellite internet or other similarly slow services with unforgiving data caps. Even in the highly-wired world, you can still find older installs of operating systems and browsers: public libraries, to pick but one example. Securing the web literally made it less accessible to many, many people around the world.
Beyond deploying service workers and hoping those struggling to bridge the digital divide make it across, I don’t really have a solution here. I think HTTPS is probably a net positive overall, and I don’t know what we could have done better. All I know is that I saw, first-hand, the negative externality that was pushed onto people far, far away from our data centers and our thoughts.
It was twenty years ago today, under the wide-spreading boughs of a tree in the front yard of a house on Long Island, that Kat and I exchanged our wedding vows before a small crowd of friends and family. Immediately after, we all moved to the tent in the back yard to celebrate.
The twentieth anniversary is, traditionally, the china anniversary. Kat’s immediate reaction upon hearing this was that it makes total sense, since by 20 years you’ve probably broken most of your wedding china and need replacements. For us, though, the resonance is a little different, since our honeymoon was a trip to China. And therein hangs an origin story.
At some point in late 1997, Kat and I were at a Meyer family gathering, probably Thanksgiving, at my paternal grandparents’ house in Cincinnati. As was my wont, I was perusing the stacks of National Geographics they always had lying around. Not like in a dentist’s office; no, these were always up to date. But there were always many of them, interleaved with many similarly contemporary Reader’s Digests.
I picked up one with a cover shot and title about China’s Three Gorges, and started leafing through it, eventually reaching the cover story. It chronicled the incredible landscapes of the Three Gorges of the Yangtze River, soaring cliff faces and ancient villages. I was immediately captivated by the story and especially the photography. I decided that I wanted to see the Gorges before they were submerged by the Three Gorges Dam Project, which is the sort of snap decision I almost never make. Usually I take time to analyze an idea and game out scenarios before reaching a conclusion, but not this time. I was immediately certain. I was certain enough to say it out loud to other people, like Kat and my parents and, who knows, probably a bunch of my extended family.
Now, fast forward a bit. At the end of that same year, Kat and I were with my parents for Christmas. We went out to dinner at Mom’s favorite spot for her birthday (also Boxing Day) and my parents said they had presents for me and my sister. We each got an envelope.
Both of them contained checks for several thousand dollars, windfall of an inheritance distribution that Mom had insisted be passed on to us. In mine, with the check, were a number of brochures for tours of China.
I was speechless. Kat asked what it was a couple of times, a little bewildered by the look on my face.
And here I must take a side trip. Kat and I had been on a trip to California a few weeks prior, just the two of us. We spent a couple of nights at Ragged Point, a spot I’d stumbled over on a previous solo trip, back in the days when the rooms intentionally had no TVs or phones. The restaurant was booked by a large group, so we ate dinner alone on the open patio under a heat umbrella, looking at the stars and enjoying the fantastic food; the chef at the time was a genius. Music played softly through hidden speakers, and although we were literally sitting outside it felt as quiet and private as any candlelit back room.
“The Christmas Song”, generally better known as “Chestnuts Roasting on an Open Fire”, started playing. Kat, smiling, asked me if I would like to dance. So we stood and danced close together, slowly shuffling around the open space the way untrained dancers do, just us and the song and the stars.
Kat swears I drew breath and opened my mouth to ask her to marry me. Maybe she’s right. But I didn’t, then. Nor the next day. Nor on Christmas Day. Which caused Kat to start thinking that maybe it wasn’t going to happen at all. She was feeling disappointed and hurt by this, as you can probably imagine, but keeping it to herself because she wasn’t sure yet if she was right or wrong.
So: back to Mom’s birthday dinner in Mansfield, Ohio, and me sitting stunned by the check and the China brochures and this unexpected, unprecedented windfall.
“Eric, what is it?” Kat asked again, with some concern starting to color her words.
“We’re going to China!” I finally blurted out.
“No, you’re going to China,” she replied a little tartly.
“No, we’re going to China,” I repeated.
Because in that moment, right there, I knew that this trip I wanted to take, the things I wanted to see so badly before they were gone — I couldn’t imagine doing and seeing all that without Kat.
That’s when I knew, beyond any shadow of a doubt, that I wanted to marry her.
I didn’t propose that night either, because I had to explain this all to her in halting, still-new words and help her (and me!) understand what had happened. She got it, as I think I knew she would. We went shopping for rings just after the New Year. I formally proposed to her, shivering on an ice-crusted deck by the Chagrin Falls, on her birthday in March.
And on July 19th, 1998, we stood underneath the spreading boughs of the tree in the front yard of her childhood home, and exchanged our wedding vows. A short time later, in a backyard tent in the heat of a mid-July afternoon on Long Island, we stood on the compact dance floor and danced to “The Christmas Song”, baffling half the attendees and bemusing the other half.
The very next day, we flew to China, and saw so much together over the next seventeen days: the Three Gorges, yes, but much more. Suzhou, Dazu, and Guilin stand out in particular for being a little more remote and not so overrun by tourists, the kinds of spots we always find inherently more interesting than large cities and glitzed-up, polished destinations. We still want to go back to Guilin some day.
In the two decades since we vowed to love and honor and respect and amuse each other, we’ve had many adventures together. Some were incredible, some were stressful, and some I would have spared us both. Picking out a card was difficult, with so many of them written as if 20 years together could never be anything but an unbroken stretch of bliss and good fortune. We’ve been through too much to respond well to such bromides; we’ve had fortune great and terrible, difficulty and ease, endless joy and boundless grief.
Every one of those days and weeks and months and years, we’ve supported and shared with each other. Kat’s been so strong, and so selfless, and I’ve tried to be the same for her. Neither of us did so perfectly, but we always tried — and we always understood when the other had to nurse a weakness, or look inward for a while. We have always been honest with each other, and accepted each other. That, more than anything, is what’s allowed us to travel together these two decades and still love each other.
I couldn’t have asked for a better partner in life and death than Kat, and I hope she’s even half as proud of and grateful for me as I am for her.