What happened was, I was preparing to roll out new designs for the News section and event pages of An Event Apart, and I had each rollout in its own branch. Somewhere in the process of bringing both into the master branch, I managed to create a merge conflict that rapidly led to more and more conflicts. I very nearly had to take off and nuke the entire site from orbit, just to start over. (A couple of branches, including dev, did have to get erased and re-pulled.)
Part of what made it worse was that at one point, I accidentally committed a quick edit to master, because I’d forgotten to check out the branch I was trying to edit, and my attempts to undo that mistake just compounded whatever other mistakes already existed. Once all the dust settled and things were back into good shape, I said to myself, “Self, I bet there’s a way to prevent commits to the master branch, because git is second only to emacs in the number of things you can do to/with it.” So I went looking, and yes, there is a way: add the following to your .git/hooks/pre-commit file.
#!/bin/sh
branch="$(git rev-parse --abbrev-ref HEAD)"
if [ "$branch" = "master" ]; then
  echo "You can't commit directly to master branch"
  exit 1
fi
I got that from this StackOverflow answer, and it was perfect for me, since I use the bash shell. So I created the pre-commit file, made a trivial README.md edit, and tried to commit to master. That’s when macOS Mojave’s Terminal spit back:
fatal: cannot exec '.git/hooks/pre-commit': Operation not permitted
Huh. I mean, it prevented me from committing to master, but not in a useful way. Once I verified that it happened in all branches, not just master, I knew there was trouble.
I checked permissions and all the rest, but I was still getting the error. If I went into .git/hooks and ran the script directly from the command line with ./pre-commit, I got a slightly different error:
-bash: ./pre-commit: /bin/bash: bad interpreter: Operation not permitted
So I submitted my own StackOverflow question, detailing what I’d done and the file and directory permissions and all the rest. I was stunned to find out the answer was that Mojave itself was blocking things, through its System Integrity Protection feature. Why did this simple file trigger SIP? I don’t know.
The fix, shared by both Jeff and Rich, was to go into .git/hooks and then type the following to see whether the file had been quarantined:
xattr -l pre-commit
It showed a com.apple.quarantine value, so I then typed:
xattr -d com.apple.quarantine pre-commit
And that was it! Now if I try to commit a change to the master branch, the commit is rejected and I get a warning message. At that point, I can git stash the changes, check out the proper branch (or a new one), and then git stash pop to bring the changes into that branch, where I can commit them and then merge the changes in properly.
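In practice, that rescue sequence is just a few commands; the branch name here is only a stand-in for whichever branch the edit should have gone to:

git stash
git checkout feature-branch   # or: git checkout -b feature-branch
git stash pop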
I may modify the script to reject commits to the dev branch as well, but I’m holding off on that for now, since the dev branch is often where merge conflicts are worked out before going to master. Either way, at least I’ll be less likely to accidentally foul up master when I’m hip-deep in other problems.
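For reference, blocking dev as well would only mean extending the test in that same hook. A minimal sketch, which I haven’t actually put into service:

#!/bin/sh
branch="$(git rev-parse --abbrev-ref HEAD)"
if [ "$branch" = "master" ] || [ "$branch" = "dev" ]; then
  echo "You can't commit directly to the $branch branch"
  exit 1
fi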
I’ve relied on a mouse for about a decade and a half. I don’t mean “relied on a mouse” in the generic sense, but rather in the sense that I’ve relied on one very specific and venerable mouse: a Logitech MX500.
I’ve had it for so long, I’d forgotten how long I’ve had it. I searched for information about its production dates and wouldn’t you know it, Wikipedia has an article devoted solely to Logitech products throughout history, because of course it does, and it lists (among other things) their dates of release. The MX500 was released in 2002, and superseded by the MX510 in 2004. I then remembered a photo I took of my eldest child when she was an infant, trying to chew on a computer mouse. I dug it out of my iPhoto library and yep, it’s my MX500. The picture is dated June 2004.
So I have photographic evidence that I’ve used this specific mouse for 15 years or more. The logo plate on top of the mouse has been worn half-smooth and half-paintless by the palm of my hand, much like the shiny-smooth areas worn into the subtle matte surface texture where the thumb and pinky finger grip the sides. The model and technical information printed on the underside has similarly worn away. It started out with four little oval glide nubs on the underside that held the bottom away from the desk surface; only one remains. Even though, as an optical mouse, it can be used on any surface, I eventually went back to soft mousepads, so as to limit further undercarriage damage.
Why have I been so devoted to this mouse? Well, it’s incredibly well engineered, for one — it’s put up with 15 years of daily use. It’s exactly the right shape for my hand, and it has multiple configurable inputs right where I expect them. There are arrow buttons just above my thumb which I use as forward/backward in browsers, buttons above and below the scroll wheel that I map to Page Up/Page Down, an extra button at almost the apex of the mouse’s back mapped to ⌥⇥ (Option-Tab), and the usual right/left mouse click buttons. Plus the scroll wheel is itself a push-down-to-click button.
Most of these features can be found on one mouse or another, but it’s rare to find them all in one mouse — and next to impossible to find them in a shape and size that feels comfortable to me. I’d occasionally looked at the secondary market, but even used, the MX500 can command three figures. I checked Amazon as I wrote this, and an unused MX500 was listing for two hundred fifty dollars. Unused copies of its successor, the MX510, were selling for even more.
Now, if you were into gaming in the first decade of the 2000s, you may have heard of or used the MX510’s successor, the MX518. Released in 2005, it was basically an MX500/MX510, but branded for gaming, with some optical-sensor upgrades for more tracking precision. The MX518 lasted until 2011, when it was superseded by a different model, which itself was superseded, which et cetera, et cetera, et cetera.
Which brings me to the point of all this. A few weeks ago, after several weeks of sporadic glitches, the scroll wheel on my MX500 almost completely stopped responding to being scrolled. Which maybe doesn’t sound like a big deal, but try going without your scroll wheel for a while. I was surprised to discover how much I relied on it. So, glumly, knowing the model was long out of production and incredibly expensive to buy, I went searching for equivalents.
And that’s when I discovered that Logitech had literally announced less than a week earlier that they were releasing an updated MX518, available for pre-order.
Friends, I have never pre-ordered anything so fast.
This past Thursday afternoon, it arrived. I got it set up and have been working with it since. And I have some impressions.
Physically, the MX518 Legendary (as Logitech has branded it) is 95% a match for my old MX500. It’s ever so slightly smaller, just enough that I can tell but not quite enough to be annoying, odd as that may seem. Otherwise, everything feels like it should. The buttons are crisp and clicky, and right where I expect them. And the scroll wheel… well, it works.
The coloration is different — the surface and buttons are all black, as opposed to the MX500’s black-and-silver two-tone styling. While I miss the two-tone a bit, there’s an upgrade: the smooth black top surface has subtle little sparkles embedded in the paint. Shiny!
On the other hand, configuring the mouse was a bit of an odyssey. First off, let me make clear that I have a weird setup, even for a grumpy old Mac user. I plug a circa-2000 Macally original iKey 104-key keyboard into my 2013 MacBook Pro. (Yes, you have sensed a trend here: when I find hardware I really like, I hang onto it like a rabid weasel. Ditto software.) The “extra” keys on the Macally like Page Up, Home, and so on don’t get recognized by a lot of current software. Even the Finder can’t read the keyboard’s function keys properly. I’ve restored their functionality with the entirely excellent BetterTouchTool, but it remains that the keyboard is just odd in its ancientness.
Anyway, I first opened System Preferences and then the Logitech Control Center pane. It couldn’t find the MX518 Legendary at all. So next I opened the (separate) Logitech Options pane, which drives the wireless mouse I use when I travel. It too was unable to find the MX518.
Some Bing-ing led me to a download for Logitech Gaming Software (hereafter LGS), which I installed. That could see the MX518 just fine. Once I stumbled my way into an understanding of LGS’s UI, I set about trying to configure the MX518’s buttons to do what I wanted.
And could not. In the list of predefined mouse actions that could be assigned to the buttons, precisely none of my desires were listed. No ⌘-arrow combos, no page up or down, not even ⌥⇥ to switch apps. I mean, I guess that’s to be expected: it’s sold as a gaming mouse. LGS has plenty of support for on-the-fly-dee-pee-eye switching and copy-paste and all that. Not so much for document editing and code browsing.
There is a way to assign keyboard combos to buttons, but again, the software could understand precisely none of the combos I wanted to record when I typed them on my Macally. So I went to the MacBook Pro’s built-in keyboard, where I was able to register ⌥⇥, ⌘→, and ⌘←. I could not, however much I tried, register Page Up or Page Down. I pressed Fn, which showed “Fn” in the LGS software, and then pressed the down arrow for Page Down, and as long as I held down both keys, it showed “Page Down”. But as soon as I let go of the down arrow, “Fn” was registered again. No Page Down for me.
Now, recall, this was happening on the laptop’s built-in keyboard. I can’t really blame this one on the age of the external Macally. I really think this one might fall on LGS itself; while a 2013 MacBook is old, it’s not that old.
I thought I might be stuck, but I intuited a workaround: I opened the Keyboard Viewer app built into macOS. With that, I could just click the virtual Page Up and Page Down keys, and LGS registered them without a hiccup. While I was in there, I used it to set the scroll wheel’s middle-button click to trigger Mission Control (F3).
The following key-repeat problem has been fixed and was not the fault of the MX518; see my comment for details on how I resolved it. The one letdown I have is that the buttons don’t appear to repeat keystrokes. So if I hold the button I’ve assigned to Page Down for example, I get exactly one page-down, and that’s it until I release and click the button again. On the MX500, holding down the button assigned to Page Down would just constantly page down until I let go. This was sometimes preferable to scrolling with the scroll wheel, especially for long documents I wanted to very quickly scan for a certain figure or other piece of the page. The same was true for all the buttons: hold it down, and the thing it was configured to do happened repeatedly until you let go.
The MX518 Legendary isn’t doing that. I don’t know if this is an inherent limitation of the mouse, its software, my configuration of it, the interaction of software and operating system, or something else entirely. It’s not an issue forty-nine times out of fifty, but that fiftieth time is annoying.
The other annoyance is one of possibly missed potential. The mouse software has, in keeping with its gaming focus, the ability to set up multiple profiles; that way, you can assign unique actions to the buttons on a per-application basis. I set up a couple of profiles to test it out, but LGS is completely opaque about how to make profiles switch automatically when you switch to an app. I’ll look for an answer online, but it’s annoying that the software promises per-app profiles, and then apparently fails to deliver on that promise.
So after all that, am I happy? Yes. It’s essentially my old mouse, except brand new. My heartfelt thanks to Logitech for bringing this workhorse out of retirement. I look forward to a decade or more with it.
Back in 2015, I wrote about Firefox’s screenshot utility, which used to be a command in the GCLI. Well, the GCLI is gone now, but the coders at Mozilla have brought command-line screenshotting back with :screenshot, currently available in Firefox Nightly and Firefox Dev Edition. It’s available in the Web Console (⌥⌘K or Tools → Web Developer → Console).
Once you’re in the Web Console, you can type :sc and then hit Tab to autocomplete :screenshot. From there, everything is the same as I wrote in 2015, with the exception that the --imgur and --chrome options no longer exist. There are plans to add uploading to Firefox Screenshots as a replacement for the old Imgur option, but as of this writing, that’s still in the future.
So the list of :screenshot options as of late August 2018 is:
--clipboard
Copies the image to your OS clipboard for pasting into other programs. Prevents saving to a file unless you use the --file option to force file-writing.
--delay
The time in seconds to wait before taking the screenshot; handy if you want to pop open a menu or invoke a hover state for the screenshot. You can use any number, not just integers.
--dpr
The Device Pixel Ratio (DPR) of the captured image. Values above 1 yield “zoomed-in” images; values below 1 create “zoomed-out” results. See the original article for more details.
--fullpage
Captures the entire page, not just the portion of the page visible in the browser’s viewport. For unusually long (or wide) pages, this can cause problems like crashing, not capturing all of the page, or just failing to capture anything at all.
--selector
Accepts a CSS selector and captures only that element and its descendants.
--file
When true, forces writing of the captured image to a file, even if --clipboard is also being used. Setting this to false doesn’t seem to have any effect.
--filename
Allows you to set a filename rather than accept the default. Explicitly saying --filename seems to be optional; I find that writing simply :screenshot test yields a file called test.png, without the need to write :screenshot --filename test. YFFMV.
I do have one warning: if you capture an image to a filename like test.png, and then you capture to that same filename, the new image will overwrite the old image. This can bite you if you’re using the up-arrow history scroll to capture images in quick succession, and forget to change the filename for each new capture. If you don’t supply a filename, then the file’s name uses the pattern of your OS screen capture naming; e.g., Screen Shot 2018-08-23 at 16.44.41.png on my machine.
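To give a sense of how the options combine, here are a couple of invocations of the sort I might type (the filenames and the selector are just examples):

:screenshot menu-open --delay 3 --selector 'nav'
:screenshot homepage-big --fullpage --dpr 2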
I still use :screenshot to this day, and I’m very happy to see it restored to the browser — thank you, Mozillans! You’re the best.
I said yesterday I would write up the process of adding Grid to meyerweb, and I did. I started it last night and I finished it this morning, and when I was done with the first draft I discovered I’d written almost four thousand words.
So I pitched it to an online magazine, and they accepted, so it should be ready in the next couple of weeks. Probably not long after Chrome ships its Grid implementation into public release, in fact. I’ll certainly share the news when it’s available.
In the meantime, you can inspect live grids for yourself, whether here or at Grid by Example or The Experimental Layout Lab of Jen Simmons or wherever else. All you need is Firefox 52, though Firefox Nightly is recommended for reasons I’ll get to in a bit.
In Firefox 52, if you inspect an element that has display: grid assigned to it, you’ll get a little waffle icon in the inspector, like so:
Click it, and Firefox will draw the grid lines on the page for you in a lovely shade of purple. It will even fill in grid gaps (which are a thing) with a crosshatch-y pattern. It’s a quick way to visualize what the grid’s template values are creating.
If you have Firefox Nightly, there’s an even more powerful tool at your disposal. First, go into the inspector’s settings, and make sure “Enable layout panel” is checked. You may or may not have to restart the browser at this point — I did, but YEMV — but once it’s up and running, there will be a “Layout” panel to go with the other panels on the right side of the Inspector. There you get the box model stuff, as well as a checklist of grids on the current page.
For each grid on the page — not just the element you’re inspecting — you can set your own color for the gridlines, though those color choices do not currently persist, even across page reloads. You can also turn on number labels for the grid lines, which are currently tiny and black no matter what you do. And if you allow grid lines to extend into infinity, you can turn the page into a blizzard of multicolored lines, assuming there are several grids present.
This panel is very much in its infancy, so we can expect future enhancements. Things like color persistence and better grid line labels are already on the to-do list, I’m told, as well as several more ambitious features. Even as it is, I find it valuable in constructing new grids and diagnosing the situation when things go sideways. (Occasionally, literally sideways: I was playing with writing-mode in grid contexts today.)
There’s another, possibly simpler, way to enable the Layout panel, which I learned about courtesy Andrei Petcu. You type about:config into the URL bar, then enter layoutview into the search field. Double-click “devtools.layoutview.enabled” to set it to “true”, and it will be ready to go. Thanks, Andrei!
Yesterday, I was looking at an existing page, wondering if it would be improved by rearranging some of the elements. I was about to fire up the git engine (spawn a branch, check it out, do edits, preview them, commit changes, etc., etc.) when I got a weird thought: could I just drag elements around in the Web Inspector in my browser of choice, Firefox Nightly, so as to quickly try out various changes without having to open an editor? Turns out the answer is yes, as demonstrated in this video!
Since I recorded the video, I’ve learned that this same capability exists in public-release Firefox, and has been in Chrome for a while. It’s probably been in Firefox for a while, too. What I was surprised to find was how many other people were similarly surprised that this is possible, which is why I made the video. It’s probably easier to understand the video if it’s full screen, or at least expanded, but I think the basic idea gets across even in small-screen format. Share and enjoy!
Throughout 2015, a few people who’ve seen me present “Designing for Crisis” at An Event Apart have noticed that, on the slides where I have filler text, it’s a localized variant. In Washington, DC, for example, one section started out:
Andrew ellicott lobortis decima thomas jefferson vulputate dynamicus fiant kingman park sollemnes ford’s theater. Vero videntur modo claritatem possim quis quod est noma howard university consequat diam. Blandit ut claram north michigan park seacula judiciary square william jefferson clinton hawthorne millard fillmore iis…
This was a product of some simple PHP I’d originally written to generate Cleveland-themed filler text a year or so back, which you can find at localipsum.meyerweb.com, and which I’d expanded upon to let me generate text for my presentations at AEA. The name comes from the original idea I had, which was to provide a list of cities/regions/whatever, so that users could pick one and generate some filler text. That never quite came together. I had a semi-working version once, but the UI was horrible and the file management was worse and I got discouraged and rolled back to what you see now.
I kept telling myself that I’d get back to it, do the city-selection thing right, polish it nicely, and then finally release the source. I’ve told myself that all year, as I manually swapped in city information to generate the filler text for each of my talks. Now I’ve finally admitted to myself that it isn’t going to happen, so: here’s the source. I chose a pretty permissive license — BSD-ISC, if I recall correctly — so feel free to make use of it for your own filler text. I’ll be happy to accept pull requests with improvements, but not package-management or complete MVC restructuring. Sorry.
I know, it’s a goofy little thing and the code is probably pants, but I kinda like it and figure maybe someone out there will too. If nothing else, we can look for a few laughs in the output and maybe — just maybe — learn a little something about ourselves along the way.
(P.S. Speaking of “Designing for Crisis”, look for a post about that, and more specifically video of it, in the next few weeks.)
The Graphical Command Line Interpreter (GCLI) has been removed from Firefox, but screenshot has been revived as :screenshot in the Web Console (⌥⌘K), with most of the same options discussed below. Thus, portions of this article have become incorrect as of August 2018. Read about the changes in my post Firefox’s :screenshot Command.
Everyone has their own idiosyncratic collection of tools they can’t work without, and I’ve recently been using one of mine as I produce figures for CSS: The Definitive Guide, Fourth Edition (CSS:TDG4e). It’s Firefox’s command-line screenshot utility.
To get access to screenshot, you first have to hit ⇧F2 for the Developer Toolbar, not ⌥⌘K for the Web Console. (I know, two command lines — who thought that was a good idea? Moving on.) Once you’re in the Developer Toolbar, you can type s and then hit Tab to autocomplete screenshot. Then type a filename for your screenshot, if you want to define it, either with or without the file extension; otherwise you’ll get whatever naming convention your computer uses for screen captures. For example, mine does something like Screen Shot 2015-10-22 at 10.05.51.png by default. If you hit [return] (or equivalent) at this point, it’ll save the screenshot to your Downloads folder (or equivalent). Done!
Except, don’t do that yet, because what really makes screenshot great is its options; in my case, they’re what elevate screenshot from useful to essential, and what set it apart from any screen-capture addon I’ve ever seen.
The option I use a lot, particularly when grabbing images of web sites for my talks, is --fullpage. That option captures absolutely everything on the page, even the parts you can’t see in the browser window. See, by default, when you use screenshot, it only shows you the portion of the page visible in the browser window. In many cases, that’s all you want or need, but for the times you want it all, --fullpage is there for you. Any time you see me do a long scroll of a web page in a talk, like I did right at the ten-minute mark of my talk at Fluent 2015, it was thanks to --fullpage.
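For example, grabbing everything on the current page under a made-up filename is as simple as:

screenshot aea-page --fullpage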
If you want the browser --chrome to show around your screenshot, though, you can’t capture the --fullpage. Firefox will just ignore the --fullpage option if you invoke --chrome, and give you the visible portion of the page surrounded by your browser chrome, including all your addon icons and unread tabs. Which makes some sense, I admit, but part of me wishes someone had gone to the effort of adding code to redraw the chrome all the way around a --fullpage capture if you asked for it.
Now, for the purposes of CSS:TDG4e’s figures, there are two screenshot options that I cannot live without.
The first is --selector, which lets you supply a CSS selector to an element — at which point, Firefox will capture just that element and its descendants. The only, and quite understandable, limitation is that the selector you supply must match a single element. For me, that’s usually just --selector 'body', since every figure I create is a single page, and there’s nothing in the body except what I want to include in the figure. So instead of trying to drag-select a region of the screen with ⇧⌘4, or (worse) trying to precisely size the browser window to show just the body element and not one pixel more, I can enter something like screenshot fig047 --selector 'body' and get precisely what I need.
That might seem like a lot to type every time, but the thing is, I don’t have to: not only does the Developer Toolbar have full tab-autocomplete, it also offers up-arrow history. So once I’ve tab-completed the command to capture my first figure, I just use the up arrow to bring the command back and change the file name. Quick, simple, efficient.
The second essential option for me is --dpr, which defines a device pixel ratio. Let’s say I want to capture something at four times the usual resolution. --dpr 4 makes it happen. Since all my figures are meant to go to print as well as ebooks, I can capture at print-worthy resolutions without having to use ⌘+ to blow up the content, or fiddle with using CSS to make everything bigger. Also if I want to go the other way and capture a deliberately pixellated version of a page, I can use something like --dpr 0.33.
I have used this occasionally to size down an image online: I “View Image” to get it in its own window, then use screenshot with a fractional DPR value to shrink it. Yes, this is a rare use case, even for me, but hey — the option exists! I haven’t used the DPR option for my talks, but given the growing use of HD 16:9 projectors — something we’ve been using at An Event Apart for a while now, actually — I’m starting to lean toward using --dpr 2 to get sharper images.
(Aside: it turns out this option is only present in very recent versions of Firefox, such as Developer Edition 43 and the current Nightlies. So if you need DPR, grab a Nightly and go crazy!)
And that’s not all! You can set a --delay in seconds, to make sure a popup menu or other bit of interaction is visible before the capture happens. If you want to take your captured image straight into another program before saving it, there’s --clipboard. And there’s an option to upload straight to --imgur, though I confess I haven’t figured out how that one works. I suspect you have to be logged into imgur first. If anyone knows, please leave a comment so the rest of us know how to use it!
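Putting a couple of those together, something like this (again, the filename is invented) would wait three seconds for a menu to open and then capture at double resolution:

screenshot menu-capture --delay 3 --dpr 2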
The one thing that irks me a little bit about screenshot is that the file name must come before the options. When I’m producing a bunch of figures in a row, having to drag-select just the file name for replacement is a touch tedious; I wish I could put the file name at the end of the command, so I could quickly drag-select it with a rightward wrist-flick. But all things considered, this is a pretty minor gripe. Well, shut my mouth and paint me red — it turns out you can put the filename after the options. Either that wasn’t possible at some point and I never retested the assertion, or it was always possible and I just messed up. Either way, this irk is irksome no more!
The other thing I wish screenshot could do is let me define a precise width or height in pixels — or, since I’m dreaming, a value using any valid CSS length unit — and scale the result to that measure. This isn’t really useful for the CSS:TDG4e figures, but it could come in pretty handy for creating talk slides. No, I have no idea how that would interact with the DPR option, but I’d certainly be willing to find out.
So that’s one of my “unusual but essential” tools. What’s yours?
As I was reading an article with a few scattered apostrophe errors, I wished that I could highlight each one, hit a report button, and know that the author had been notified of the errors so that they could fix them. No requirement to leave a comment chastising them for bad grammar, replete with lots of textual context so they could find the errors — just a quick “hey, I spotted this error, now you know so you can fix it” notice, sent in private to them.
Then I realized that I wanted that for my own site, to let people tell me when I had gaffes in need of repair. It’s an almost-wiki, where the crowd can flag errors that need to be corrected without having to edit the source themselves — or have the power to edit it themselves, for that matter, which is an open door for abuse.
I haven’t thought this through in tons of detail, but here’s how it feels in my head:
Visitors highlight a typo and click a button to report it. Or else click a button to start reporting, highlight a word, and click again to submit. This part is kind of fuzzy in my head, and yes, “click” is not the best term here, but it’s one we all understand.
Interesting extra feature: the ability to classify the type of error when reporting. For example: apostrophe, misspelling, parallelism, pronoun trouble.
Other interesting extra feature: the ability to inform users of the ground rules before they report. For example: “This site uses British punctuation rules, the Oxford comma, and American spelling.” (Which I do.)
The author gets notice whenever an error is reported, or else can opt for a daily digest.
Each notice lets the author quickly accept or reject the reported error, much as can be done with edits in MS Word and similar programs, along with a link that will jump the author straight to the reported error so they can see it in context. If rejected, future reports of that word are disabled. If accepted, the change is made immediately, without requiring a dive into the CMS.
When an error is reported, future visitors to the site will see any already-reported errors in highlight. This keeps them from reporting the same thing over and over, and also acts as incentive to the author to fix errors quickly. (The highlight style could be customizable.)
Reports can only happen at the word level, not the individual letter level. So reporting an “it’s” error highlights all of “it’s”, not just the offending apostrophe. Perhaps also for multiple words, though only up to a certain number, like three. And yes, I’m keenly aware of the challenges of defining a “word” in an internationally-aware manner, but perhaps in ideographic languages you switch to per-symbol. (Not an expert here, so take that with a few grinders of salt.)
The author can optionally limit the number of reports permitted per hour/day/whatever. This could be enforced globally or on a per-user basis, though globally is a tad more robust.
That’s how I see it working, after a few minutes’ thought. It seems pretty achievable as a CMS plugin, actually, though I confess that I don’t have anywhere close to the time and coding chops needed to make it happen right now (or any time soon). The biggest challenge to me seems like the “edit-on-accept-without-CMS-diving” part, since there are so many CMSes and particularly since static sites are staging a comeback. Still, I think it would be a fun and worthwhile project for someone out there. If somebody takes it on, I’d love to follow along and see where it ends up, particularly if they do it for WordPress (which is what the blog hereabouts runs on).