Posts in the Tech Category

“Stacked ‘Borders’” Published at CSS-Tricks

Published 5 years, 8 months past

I toyed with the idea of nesting elements with borders and some negative margins to pull one border on top of another, or nesting a border inside an outline and then using negative margins to keep from throwing off the layout. But none of that felt satisfying.

It turns out there are a number of tricks to create the effect of stacking one border atop another by combining a border with some other CSS effects, or even without actually requiring the use of any borders at all. Let’s explore, shall we?

That’s from the introduction to my article “Stacked ‘Borders’”, which marks the first time I’ve ever been published at the venerable upstart CSS-Tricks.  (I’m old, so I can call things both venerable and an upstart.  You kids today!)  In it, I explore ways to simulate the effect of stacking multiple element borders atop one another, including combining box shadows and outlines, borders and backgrounds, and even using border images, which have a much wider support base than you might have realized.
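As a small taste of the general idea, here’s a minimal sketch of the box-shadow approach (not code lifted from the article; the class name and colors are placeholders).  A zero-blur, spread-only box shadow painted outside a real border reads as a second, stacked “border” without adding anything to the layout box:

.stacked {border: 5px solid slategray;
   box-shadow: 0 0 0 5px gold;  /* spread-only shadow acts as a second, outer "border" */}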

And yes, as per my usual, the images in the piece are all double-dpi :screenshot captures directly from Firefox.

Many thanks to Chris Coyier for accepting the piece, and Geoff Graham for his editorial assistance.  I hope you’ll find at least some part of it useful, or better still, interesting.  Share and enjoy!


Legend of the Stalwart Mouse: Return of the MX518

Published 5 years, 8 months past

I’ve relied on a mouse for about a decade and a half.  I don’t mean “relied on a mouse” in the generic sense, but rather in the sense that I’ve relied on one very specific and venerable mouse: a Logitech MX500.

I’ve had it for so long, I’d forgotten how long I’ve had it.  I searched for information about its production dates and wouldn’t you know it, Wikipedia has an article devoted solely to Logitech products throughout history, because of course it does, and it lists (among other things) their dates of release.  The MX500 was released in 2002, and superseded by the MX510 in 2004.  I then remembered a photo I took of my eldest child when she was an infant, trying to chew on a computer mouse.  I dug it out of my iPhoto library and yep, it’s my MX500.  The picture is dated June 2004.

So I have photographic evidence that I’ve used this specific mouse for 15 years or more.  The logo plate on top of the mouse has been worn half-smooth and half-paintless by the palm of my hand, much like the shiny-smooth areas worn into the subtle matte surface texture where the thumb and pinky finger grip the sides.  The model and technical information printed on the underside has similarly worn away.  It started out with four little oval glide nubs on the underside that held the bottom away from the desk surface; only one remains.  Even though, as an optical mouse, it can be used on any surface, I eventually went back to soft mousepads, so as to limit further undercarriage damage.

The old gray mare — er, mouse — proving that it’s not the years, it’s the mileage

Why have I been so devoted to this mouse?  Well, it’s incredibly well engineered, for one — it’s put up with 15 years of daily use.  It’s exactly the right shape for my hand, and it has multiple configurable inputs right where I expect them.  There are arrow buttons just above my thumb which I use as forward/backward in browsers, buttons above and below the scroll wheel that I map to Page Up/Page Down, an extra button at almost the apex of the mouse’s back mapped to ⌥⇥ (Option-Tab), and the usual right/left mouse click buttons.  Plus the scroll wheel is itself a push-down-to-click button.

Most of these features can be found on one mouse or another, but it’s rare to find them all in one mouse — and next to impossible to find them in a shape and size that feels comfortable to me.  I’d occasionally looked at the secondary market, but even used, the MX500 can command three figures.  I checked Amazon as I wrote this, and an unused MX500 was listing for two hundred fifty dollars.  Unused copies of its successor, the MX510, were selling for even more.

Now, if you were into gaming in the first decade of the 2000s, you may have heard of or used the MX510’s successor, the MX518.  Released in 2005, it was basically an MX500/MX510, but branded for gaming, with some optical-sensor upgrades for more tracking precision.  The MX518 lasted until 2011, when it was superseded by a different model, which itself was superseded, which et cetera, et cetera, et cetera.

Which brings me to the point of all this.  A few weeks ago, after several weeks of sporadic glitches, the scroll wheel on my MX500 almost completely stopped responding to being scrolled.  Which maybe doesn’t sound like a big deal, but try going without your scroll wheel for a while.  I was surprised to discover how much I relied on it.  So, glumly, knowing the model was long out of production and incredibly expensive to buy, I went searching for equivalents.

And that’s when I discovered that Logitech had literally announced less than a week earlier that they were releasing an updated MX518, available for pre-order.

Friends, I have never pre-ordered anything so fast.

This past Thursday afternoon, it arrived.  I got it set up and have been working with it since.  And I have some impressions.

Physically, the MX518 Legendary (as Logitech has branded it) is 95% a match for my old MX500.  It’s ever so slightly smaller, just enough that I can tell but not quite enough to be annoying, odd as that may seem.  Otherwise, everything feels like it should.  The buttons are crisp and clicky, and right where I expect them.  And the scroll wheel… well, it works.

The coloration is different — the surface and buttons are all black, as opposed to the MX500’s black-and-silver two-tone styling.  While I miss the two-tone a bit, there’s an upgrade: the smooth black top surface has subtle little sparkles embedded in the paint.  Shiny!

The changing of the guard

On the other hand, configuring the mouse was a bit of an odyssey.  First off, let me make clear that I have a weird setup, even for a grumpy old Mac user.  I plug a circa-2000 Macally original iKey 104-key keyboard into my 2013 MacBook Pro.  (Yes, you have sensed a trend here: when I find hardware I really like, I hang onto it like a rabid weasel.  Ditto software.)  The “extra” keys on the Macally like Page Up, Home, and so on don’t get recognized by a lot of current software.  Even the Finder can’t read the keyboard’s function keys properly.  I’ve restored their functionality with the entirely excellent BetterTouchTool, but it remains that the keyboard is just odd in its ancientness.

Anyway, I first opened System Preferences and then the Logitech Control Center pane.  It couldn’t find the MX518 Legendary at all.  So next I opened the (separate) Logitech Options pane, which drives the wireless mouse I use when I travel.  It too was unable to find the MX518.

Where my paging functions at?

Some Bing-ing led me to a download for Logitech Gaming Software (hereafter LGS), which I installed.  That could see the MX518 just fine.  Once I stumbled my way into an understanding of LGS’s UI, I set about trying to configure the MX518’s buttons to do what I wanted.

And could not.  In the list of predefined mouse actions that could be assigned to the buttons, precisely none of my desires were listed.  No ⌘-arrow combos, no page up or down, not even ⌥⇥ to switch apps.  I mean, I guess that’s to be expected: it’s sold as a gaming mouse.  LGS has plenty of support for on-the-fly-dee-pee-eye switching and copy-paste and all that.  Not so much for document editing and code browsing.

There is a way to assign keyboard combos to buttons, but again, the software could understand precisely none of the combos I wanted to record when I typed them on my Macally.  So I went to the MacBook Pro’s built-in keyboard, where I was able to register ⌥⇥, ⌘→, and ⌘←.  I could not, however much I tried, register Page Up or Page Down.  I pressed Fn, which showed “Fn” in the LGS software, and then pressed the down arrow for Page Down, and as long as I held down both keys, it showed “Page Down”.  But as soon as I let go of the down arrow, “Fn” was registered again.  No Page Down for me.

Now, recall, this was happening on the laptop’s built-in keyboard.  I can’t really blame this one on the age of the external Macally.  I really think this one might fall on LGS itself; while a 2013 MacBook is old, it’s not that old.

I thought I might be stuck, but I intuited a workaround: I opened the Keyboard Viewer app built into macOS.  With that, I could just click the virtual Page Up and Page Down keys, and LGS registered them without a hiccup.  While I was in there, I used it to set the scroll wheel’s middle-button click to trigger Mission Control (F3).

(The key-repeat problem described below has since been fixed, and was not the fault of the MX518; see my comment for details on how I resolved it.)

The one letdown I have is that the buttons don’t appear to repeat keystrokes.  So if I hold the button I’ve assigned to Page Down, for example, I get exactly one page-down, and that’s it until I release and click the button again.  On the MX500, holding down the button assigned to Page Down would just constantly page down until I let go.  This was sometimes preferable to scrolling with the scroll wheel, especially for long documents I wanted to very quickly scan for a certain figure or other piece of the page.  The same was true for all the buttons: hold it down, and the thing it was configured to do happened repeatedly until you let go.

The MX518 Legendary isn’t doing that.  I don’t know if this is an inherent limitation of the mouse, its software, my configuration of it, the interaction of software and operating system, or something else entirely.  It’s not an issue forty-nine times out of fifty, but that fiftieth time is annoying.

The other annoyance is one of possibly missed potential.  The mouse software has, in keeping with its gaming focus, the ability to set up multiple profiles; that way, you can assign unique actions to the buttons on a per-application basis.  I set up a couple of profiles to test it out, but LGS is completely opaque about how to make profiles switch automatically when you switch to an app.  I’ll look for an answer online, but it’s annoying that the software promises per-app profiles, and then apparently fails to deliver on that promise.

So after all that, am I happy?  Yes.  It’s essentially my old mouse, except brand new.  My heartfelt thanks to Logitech for bringing this workhorse out of retirement.  I look forward to a decade or more with it.


Variable Font Support

Published 6 years, 2 months past

Firefox 62 ships today, bringing with it some real CSS goodness.  For one: float shapes!  Which means now, mainline Firefox users will see the text flow past the blender in “Handiwork” the same way Chrome users have for a long time now.
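If you haven’t played with float shapes yet, here’s a minimal sketch of the feature (the class name and image URL are hypothetical stand-ins, not taken from the actual page):

img.blender {float: left; margin-right: 1em;
   shape-outside: url(blender.png);  /* text wraps along the shape in the image's alpha channel */
   shape-margin: 1em;}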

But an even bigger addition is support for variable fonts.  The ability to have one font file that mathematically describes variants on the base face means that all kinds of fine-grained typography are possible with far less bandwidth overhead and a major reduction in page weight.
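As a quick sketch of the kind of fine-grained control a variable font allows (the font name here is a placeholder; any variable font exposing weight and width axes would work):

h1 {font-family: "ExampleVariable", sans-serif;  /* placeholder variable font */
   font-variation-settings: "wght" 650, "wdth" 85;  /* dial in exact weight and width values */}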

However: bear in mind that like Safari, but unlike Chrome, Firefox’s variable-font support is dependent on the operating system on which it runs.  If you have Windows 10, or macOS 10.13, then you have variable font support in Firefox and Safari.  Earlier versions of those operating systems don’t support variable fonts, and so Safari and Firefox don’t either.  Chrome rolls its own variable-font support, so it can extend support backwards in the OS timeline.

(I don’t know how things stand in the Linux world.  Hopefully someone can clear things up in the comments!)

I say this not to chastise Firefox (nor Safari), because I tend to think leaning on the OS for this sort of thing is reasonable.  I feel the same way about form elements like <select> dropdowns, to be clear, which I know likely places me in the minority.  The point here is to give you a heads-up: if you get reports that a font isn’t doing the variable thing you styled, but it’s working fine for you, keep “check their operating system version” on your list of diagnostic tests.


Firefox’s :screenshot command

Published 6 years, 3 months past

Back in 2015, I wrote about Firefox’s screenshot utility, which used to be a command in the GCLI.  Well, the GCLI is gone now, but the coders at Mozilla have brought command-line screenshotting back with :screenshot, currently available in Firefox Nightly and Firefox Dev Edition.  It’s available in the Web Console (⌥⌘K or Tools → Web Developer → Console).

An image I captured by typing :screenshot --dpr 0.5 in the Web Console

Once you’re in the Web Console, you can type :sc and then hit Tab to autocomplete :screenshot. From there, everything is the same as I wrote in 2015, with the exception that the --imgur and --chrome options no longer exist.  There are plans to add uploading to Firefox Screenshots as a replacement for the old Imgur option, but as of this writing, that’s still in the future.

So the list of :screenshot options as of late August 2018 is:

--clipboard
Copies the image to your OS clipboard for pasting into other programs.  Prevents saving to a file unless you use the --file option to force file-writing.
--delay
The time in seconds to wait before taking the screenshot; handy if you want to pop open a menu or invoke a hover state for the screenshot.  You can use any number, not just integers.
--dpr
The Device Pixel Ratio (DPR) of the captured image.  Values above 1 yield “zoomed-in” images; values below 1 create “zoomed-out” results.  See the original article for more details.
--fullpage
Captures the entire page, not just the portion of the page visible in the browser’s viewport.  For unusually long (or wide) pages, this can cause problems like crashing, not capturing all of the page, or just failing to capture anything at all.
--selector
Accepts a CSS selector and captures only that element and its descendants.
--file
When true, forces writing of the captured image to a file, even if --clipboard is also being used.  Setting this to false doesn’t seem to have any effect.
--filename
Allows you to set a filename rather than accept the default.  Explicitly saying --filename seems to be optional; I find that writing simply :screenshot test yields a file called test.png, without the need to write :screenshot --filename test. YFFMV.
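Putting a few of those options together, say a full-page, double-resolution capture taken after a three-second delay and written to a specific filename, might look like this (purely an illustrative combination of the flags listed above):

:screenshot --fullpage --dpr 2 --delay 3 --filename fullpage-test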

I do have one warning: if you capture an image to a filename like test.png, and then you capture to that same filename, the new image will overwrite the old image.  This can bite you if you’re using the up-arrow history scroll to capture images in quick succession, and forget to change the filename for each new capture.  If you don’t supply a filename, then the file’s name uses the pattern of your OS screen capture naming; e.g., Screen Shot 2018-08-23 at 16.44.41.png on my machine.

I still use :screenshot to this day, and I’m very happy to see it restored to the browser — thank you, Mozillians!  You’re the best.


Securing Web Sites Made Them Less Accessible

Published 6 years, 3 months past

In the middle of last month (July 2018), I found myself staring at a projector screen, waiting once again to see if Wikipedia would load.  If I was lucky, the page started rendering 15-20 seconds after I sent the request.  If not, it could be closer to 60 seconds, assuming the browser didn’t just time out on the connection.  I saw a lot of “the server stopped responding” over the course of a few days.

It wasn’t just Wikipedia, either.  CNN International had similar load times.  So did Google’s main search page.  Even this here site, with minimal assets to load, took a minimum of 10 seconds to start rendering.  Usually longer.

In 2018?  Yes.  In rural Uganda, where I was improvising an introduction to web development for a class of vocational students, that’s the reality.  They can have a computer lab full of Dell desktops running Windows or rows of Raspberry Pis running Ubuntu or whatever setup there is, but when satellites in geosynchronous earth orbit are your only source of internet, you wait.  And wait.  And wait.

I want to explain why — and far more importantly, how we’ve made that experience interminably worse and more expensive in the name of our comfort and security.

First, please consider the enormously constrained nature of satellite internet access.  If you’re already familiar with this world, skip ahead a few paragraphs; but if not, permit me a brief description of the challenges.

For geosynchronous-satellite internet access, the speed of light becomes a factor in ping times: just having the signals propagate through a mixture of vacuum and atmosphere chews up approximately half a second of travel time over roughly 89,000 miles (~143,000km).  If all that distance were vacuum, your absolute floor for ping latency would be about 506 milliseconds.

That’s just the time for the signals to make two round trips to geosynchronous orbit.  In reality, there’s also the time to route the packets on either end, plus the re-transmission time at the satellite itself.

But that’s not the real connection killer in most cases: packet loss is.  After all, these packets are going to orbit and back.  Lots of things along those long and lonely signal paths can cause the packets to get dropped.  50% packet loss is not uncommon; 80% is not unexpected.

So, you’re losing half your packets (or more), and the packets that aren’t lost have latency times around two-thirds of a second (or more).  Each.

That’s reason enough to set up a local caching server.  Another, even more pressing reason is that pretty much all commercial satellite connections come with data caps.  Where I was, their cap was 50GB/month.  Beyond that, they could either pay overages, or just not have data until the next month.  So if you can locally cache URLs so that they only count against your data usage the first time they’re loaded, you do that.  And someone had, for the school where I was teaching.

But there I stood anyway, hoping my requests to load simple web pages would bear fruit, and I could continue teaching basic web principles to a group of vocational students.  Because Wikipedia wouldn’t cache.  Google wouldn’t cache.  Meyerweb wouldn’t cache.  Almost nothing would cache.

Why?

HTTPS.

A local caching server, meant to speed up commonly-requested sites and reduce bandwidth usage, is a “man in the middle”.  HTTPS, which by design prevents man-in-the-middle attacks, utterly breaks local caching servers.  So I kept waiting and waiting for remote resources, eating into that month’s data cap with every request.

The drive to force every site on the web to HTTPS has pushed the web further away from the next billion users — not to mention a whole lot of the previous half-billion.  I saw a piece that claimed, “Investing in HTTPS makes it faster, cheaper, and easier for everyone.”  If you define “everyone” as people with gigabit fiber access, sure.  Maybe it’s even true for most of those whose last mile is copper.  But for people beyond the reach of glass and wire, every word of that claim was wrong.

If this is a surprise to you, you’re by no means alone.  I hadn’t heard anything about it, so I asked a number of colleagues if they knew about the problem.  Not only had they not, they all reacted the same way I did: this must not be an actual problem, or we’d have heard about it!  But no.

Can we do anything?  For users of up-to-date browsers, yes: service workers create a “good” man in the middle that sidesteps the HTTPS problem, so far as I understand.  So if you’re serving content over HTTPS, creating a service worker should be one of your top priorities right now, even if it’s just to do straightforward local caching and nothing fancier.  I haven’t gotten one up for meyerweb yet, but I will do so very soon.

That’s great for modern browsers, but not everyone has the option to be modern.  Sometimes they’re constrained by old operating systems to run older browsers, ones with no service-worker support: a lab full of Windows XP machines limited to IE8, for example.  Or on even older machines, running Windows 95 or other operating systems of that era.  Those are most likely to be the very people who are in situations where they’re limited to satellite internet or other similarly slow services with unforgiving data caps.  Even in the highly-wired world, you can still find older installs of operating systems and browsers: public libraries, to pick but one example.  Securing the web literally made it less accessible to many, many people around the world.

Beyond deploying service workers and hoping those struggling to bridge the digital divide make it across, I don’t really have a solution here.  I think HTTPS is probably a net positive overall, and I don’t know what we could have done better.  All I know is that I saw, first-hand, the negative externality that was pushed onto people far, far away from our data centers and our thoughts.

My thanks to Tim Kadlec and Ethan Marcotte for their feedback and insight while I was drafting this post, and to Lara Hogan and Aaron Gustafson for their early assistance with my research.


What is the CSS ‘ch’ Unit?

Published 6 years, 4 months past

I keep seeing authors and speakers refer to the ch unit as meaning “character width”.  This leads to claims that you can “make your content column 60 characters wide for maximum readability” or “size images to be a certain number of characters!”

Well… yes and no.  Specifically, yes if you’re using fixed-width fonts.  Otherwise, mostly no.

This is because, despite what the letters ch might imply, ch units are not “character” units.  They are defined as:

Equal to the used advance measure of the “0” (ZERO, U+0030) glyph found in the font used to render it. (The advance measure of a glyph is its advance width or height, whichever is in the inline axis of the element.)

So however wide the “0” character is in a given typeface, that’s the measure of one ch.  In monospace (fixed-width) fonts, where all characters are the same width, 1ch equals one character.  In proportional (variable-width) fonts, any given character could be wider or narrower than the “0” character.

To illustrate this, here are a few example elements which are set to be exactly 20ch wide, and also contain exactly 20 characters.

Courier

Look, 20 characters. abcdefghijklmnopqrst 12345678901234567890 iiiiiiiiiiiiiiiiiiii mmmmmmmmmmmmmmmmmmmm

Helvetica

Look, 20 characters. abcdefghijklmnopqrst 12345678901234567890 iiiiiiiiiiiiiiiiiiii mmmmmmmmmmmmmmmmmmmm

Georgia

Look, 20 characters. abcdefghijklmnopqrst 12345678901234567890 iiiiiiiiiiiiiiiiiiii mmmmmmmmmmmmmmmmmmmm

It’s probably no surprise that in Courier, all the elements are the exact same width as their text contents.  In Helvetica, by contrast, this is mostly not the case except for numbers, which appear to be fixed-width.  In Georgia, meanwhile, none of the text contents fit the boxes, not even the numbers.

What I’ve found through random experimentation is that in proportional typefaces, 1ch is usually wider than the average character width, generally by around 20-30%.  But there are at least a few typefaces where the zero symbol is skinny with respect to the other letterforms; in such a case, 1ch is narrower than the average character width.  Trajan Pro is one example I found where the zero was a bit narrower than the average, but I’m sure there are others.  Conversely, I’m sure there are typefaces with Big Fat Zeroes, in which case the difference between ch and the average character width could be around 50%.

So in general, if you want an 80-character column width and you’re going to use ch to size it, aim for about 60ch, unless you’re specifically working with a typeface that has a skinny zero.  And if you’re working with multiple typefaces, say one for headlines and another for body copy, be careful about setting ch measures and thinking they’ll be equivalent between the two fonts.  The odds are very, very high they won’t be.
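In CSS terms, that rule of thumb comes out to something like this minimal sketch (the selector is just an example, and the 60ch value is the rough approximation from above, not an exact conversion):

article {max-width: 60ch;}  /* roughly an 80-character line in many proportional faces */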

It would be interesting to see the Working Group take up the idea of average character width as a unit of measure — say, 1acw or possibly just 1cw — which actually uses all the letterforms in a typeface to calculate an average value.  That would get a lot closer to “make your columns 60 characters wide!” in a lot more cases than ch does now.


Specificity in :not(), :has(), and :matches()

Published 6 years, 5 months past

A few years back, I wrote a short post on specificity, element proximity, and the negation pseudo-class.  Everything in it is still accurate and relevant, but I have some updates to share.

First off, I’d like to clarify something that some people may have found confusing.  In that post, I said:

But it turns out that the negation pseudo-class isn’t counted as a pseudo-class.

That might leave some people with the idea that the entire negation portion of the selector is ignored for the purposes of specificity, especially if you don’t speak spec.

So consider the following:

div:not(.one) p

In order from left to right, that’s an element selector (div), a negation pseudo-class (:not) containing a class selector (.one), and another element selector (p).  Two element selectors and one class selector are counted towards the specificity, yielding a total of 0,0,1,2.  That’s the same specificity as div.one p, though the two selectors select very different things.
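Side by side, with their specificities as comments:

div:not(.one) p  /* 0,0,1,2 */
div.one p        /* 0,0,1,2 */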

In Ye Olden Days, that was easy enough to work out, because :not() could only ever contain a simple selector.  Things are looking to get more complicated, however — :not() is set to accept grouped selectors.  So we will at some point be able to say:

div:not(.one, .two, #navbar) p

So any p element that is descended from a div whose class contains neither one nor two, and which doesn’t have an id of navbar, will be selected.

But how do we calculate the specificity of that whole selector?  Just add up all the pieces?  No.  The Working Group recently decided that the specificity contributed from inside a :not() will be equal to the single selector with the highest specificity.  So given div:not(.one, .two, #navbar) p, the #navbar will contribute 0,1,0,0 to the overall specificity of the selector, yielding a total of 0,1,0,2.  The specificities of .one and .two are ignored.

This same approach will be taken with the :has() and :matches() pseudo-classes.  Thus we get the following:


:matches(nav, header, footer#pageend) a[href] {color: silver;}  /* 0,1,1,2 */
article:has(a.external, a img)  /* 0,0,1,2 */
input:not([type="radio"], [type="checkbox"])  /* 0,0,1,1 */

In the first instance, the bits that are added together are footer#pageend and a[href], so that’s one ID, one attribute, and two elements.  In the second, it’s article and a.external for one class and two elements.  And last, we add up input and either of the [type=""] attribute selectors, since their specificities are equal, which means we add up one attribute and one element.

There is still, so far as I’m aware, no concept of DOM-tree proximity in CSS.  I would still continue to wager that will remain true, though I’d put a fair bit less money down now than I would have six years ago.


Displaying CSS Breakpoint Information with Generated Content

Published 6 years, 9 months past

In the course of experimenting with an example design for my talks at An Event Apart this year, I came up with a way to keep track of which breakpoint was in force as I tested the design’s responsiveness.  I searched the web to see if anyone else had written about this and didn’t come up with any results, so I’ll document it here.  And probably also in the talks.

What I found was that, since I was setting breakpoints in ems instead of pixels, the responsive testing view in browsers didn’t really help, because I can’t maintain realtime mapping in my head from the current pixel value to however many rems it equals.  Since I don’t think the browser has a simple display of that information, I decided I’d do it myself.

It starts with some generated content:

body::before {content: "default";
   position: fixed; top: 1px; right: 1px; z-index: 100; padding: 1ch;
   background: rgba(0,0,0,0.67); color: rgba(255,255,255,0.75);
   font: bold 0.85rem Lucida Grande, sans-serif;}

You can of course change these to some other placement and appearance.  You can also attach these styles to the html element, or your page wrapper if you have one, or honestly even the footer of your document — since the position is fixed, it’ll be viewport-relative no matter where it originates.  The real point here is that we’re generating a bit of text we can change at each breakpoint, like so:

@media (max-width: 38em) {
   body::before {content: "<38em";}
   /* the rest of the breakpoint styles here */
}
@media (max-width: 50em) {
   body::before {content: "<50em";}
   /* the rest of the breakpoint styles here */
}
@media (min-width: 80em) {
   body::before {content: ">80em";}
   /* the rest of the breakpoint styles here */
}

The labels can be any string you want, so you can use “Narrow”, “Wide”, and so on just as easily as showing the measure in play, as I did.

The downside for me is that we can’t automatically make the labels cumulative in native CSS.  That means the order in which the @media blocks appear will determine which label is shown, even if multiple blocks are being applied.  As an example, given the styles above, at a width of 25em, the label shown will be <50em even though both the 38em and 50em blocks apply.

There are ways around this, like switching the order of the max-width blocks so the 38em block comes after the 50em block.  Or we could play specificity games:

@media (max-width: 38em) {
   html body::before {content: "<38em";}
   /* the rest of the breakpoint styles here */
}
@media (max-width: 50em) {
   body::before {content: "<50em";}
   /* the rest of the breakpoint styles here */
}

That’s not a solution that scales, sadly.  Probably better to sort the max-width media blocks in descending order, if you think you might end up with several.
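For example, re-sorting the two max-width blocks from earlier in descending order means the narrower breakpoint’s label wins whenever both blocks apply (the same blocks as above, just reordered):

@media (max-width: 50em) {
   body::before {content: "<50em";}
   /* the rest of the breakpoint styles here */
}
@media (max-width: 38em) {
   body::before {content: "<38em";}
   /* the rest of the breakpoint styles here */
}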

The upside is that it’s easy to find and remove these lines once the development phase moves to QA.  Even better, before that point, you get a fully customizable in-viewport indication of where you are in the breakpoint stack as you look at the work in progress.  It’s pretty trivial to take this further by also changing the background color of the little box.  Maybe use a green for all the blocks above the “standard” set, and a red for all those below it.  Or toss in little background image icons of a phone or a desktop, if you have some handy.

So that’s the quick-and-dirty little responsive development hack I came up with this morning.  I hope it’s useful to some of you out there — and, if so, by all means share and enjoy!


Addendum: Emil Björklund proposes a variant approach that uses CSS Custom Properties (aka CSS variables) to implement this technique.

