Throughout 2015, a few people who’ve seen me present “Designing for Crisis” at An Event Apart have noticed that, on the slides where I have filler text, it’s a localized variant. In Washington, DC, for example, one section started out:
Andrew ellicott lobortis decima thomas jefferson vulputate dynamicus fiant kingman park sollemnes ford’s theater. Vero videntur modo claritatem possim quis quod est noma howard university consequat diam. Blandit ut claram north michigan park seacula judiciary square william jefferson clinton hawthorne millard fillmore iis…
This was a product of some simple PHP I’d originally written to generate Cleveland-themed filler text a year or so back, which you can find at localipsum.meyerweb.com, and which I’d expanded upon to let me generate text for my presentations at AEA. The name comes from the original idea I had, which was to provide a list of cities/regions/whatever, so that users could pick one and generate some filler text. That never quite came together. I had a semi-working version once, but the UI was horrible and the file management was worse and I got discouraged and rolled back to what you see now.
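I haven't reproduced the PHP here, but the core idea is simple enough to sketch in a few lines. This is a hypothetical reconstruction, not the actual source; the word lists are illustrative, drawn from the DC sample above, and `local_ipsum` is a name I made up for the sketch.

```python
import random

# Sketch of the locali-ipsum idea: blend classic lorem-style words with
# city-specific terms. These lists are examples pulled from the quoted
# Washington, DC output, not the real data files.
LOREM = ["lobortis", "vulputate", "dynamicus", "fiant", "sollemnes",
         "vero", "videntur", "modo", "claritatem", "possim", "quis"]
LOCAL = ["andrew ellicott", "thomas jefferson", "kingman park",
         "ford's theater", "howard university", "judiciary square"]

def local_ipsum(word_count, local_ratio=0.3, seed=None):
    """Return one filler 'sentence' mixing lorem and local terms."""
    rng = random.Random(seed)
    words = []
    for _ in range(word_count):
        pool = LOCAL if rng.random() < local_ratio else LOREM
        words.append(rng.choice(pool))
    return " ".join(words).capitalize() + "."

print(local_ipsum(12, seed=42))
```

Swap in a different `LOCAL` list and you have the city-selection feature that never quite came together.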
I kept telling myself that I’d get back to it, do the city-selection thing right, polish it nicely, and then finally release the source. I’ve told myself that all year, as I manually swapped in city information to generate the filler text for each of my talks. Now I’ve finally admitted to myself that it isn’t going to happen, so: here’s the source. I chose a pretty permissive license — BSD-ISC, if I recall correctly — so feel free to make use of it for your own filler text. I’ll be happy to accept pull requests with improvements, but not package-management or complete MVC restructuring. Sorry.
I know, it’s a goofy little thing and the code is probably pants, but I kinda like it and figure maybe someone out there will too. If nothing else, we can look for a few laughs in the output and maybe — just maybe — learn a little something about ourselves along the way.
(P.S. Speaking of “Designing for Crisis”, look for a post about that, and more specifically video of it, in the next few weeks.)
Everyone has their own idiosyncratic collection of tools they can’t work without, and I’ve recently been using one of mine as I produce figures for CSS: The Definitive Guide, Fourth Edition (CSS:TDG4e). It’s Firefox’s command-line screenshot utility.
To get access to screenshot, you first have to hit ⇧F2 for the Developer Toolbar, not ⌥⌘K for the Web Console. (I know, two command lines — who thought that was a good idea? Moving on.) Once you’re in the Developer Toolbar, you can type s and then hit Tab to autocomplete screenshot. Then type a filename for your screenshot, if you want to define it, either with or without the file extension; otherwise you’ll get whatever naming convention your computer uses for screen captures. For example, mine does something like Screen Shot 2015-10-22 at 10.05.51.png by default. If you hit [return] (or equivalent) at this point, it’ll save the screenshot to your Downloads folder (or equivalent). Done!
Except, don’t do that yet, because what really makes screenshot great is its options; in my case, they’re what elevate screenshot from useful to essential, and what set it apart from any screen-capture addon I’ve ever seen.
The option I use a lot, particularly when grabbing images of web sites for my talks, is --fullpage. That option captures absolutely everything on the page, even the parts you can’t see in the browser window. See, by default, when you use screenshot, it only captures the portion of the page visible in the browser window. In many cases, that’s all you want or need, but for the times you want it all, --fullpage is there for you. Any time you see me do a long scroll of a web page in a talk, like I did right at the ten-minute mark of my talk at Fluent 2015, it was thanks to --fullpage.
If you want the browser --chrome to show around your screenshot, though, you can’t capture the --fullpage. Firefox will just ignore the --fullpage option if you invoke --chrome, and give you the visible portion of the page surrounded by your browser chrome, including all your addon icons and unread tabs. Which makes some sense, I admit, but part of me wishes someone had gone to the effort of adding code to redraw the chrome all the way around a --fullpage capture if you asked for it.
Now, for the purposes of CSS:TDG4e’s figures, there are two screenshot options that I cannot live without.
The first is --selector, which lets you supply a CSS selector to an element — at which point, Firefox will capture just that element and its descendants. The only, and quite understandable, limitation is that the selector you supply must match a single element. For me, that’s usually just --selector 'body', since every figure I create is a single page, and there’s nothing in the body except what I want to include in the figure. So instead of trying to drag-select a region of the screen with ⇧⌘4, or (worse) trying to precisely size the browser window to show just the body element and not one pixel more, I can enter something like screenshot fig047 --selector 'body' and get precisely what I need.
That might seem like a lot to type every time, but the thing is, I don’t have to: not only does the Developer Toolbar have full tab-autocomplete, the Toolbar also offers up-arrow history. So once I’ve tab-completed the command to capture my first figure, I just use the up arrow to bring the command back and change the file name. Quick, simple, efficient.
The second essential option for me is --dpr, which defines a device pixel ratio. Let’s say I want to capture something at four times the usual resolution. --dpr 4 makes it happen. Since all my figures are meant to go to print as well as ebooks, I can capture at print-worthy resolutions without having to use ⌘+ to blow up the content, or fiddle with using CSS to make everything bigger. Also, if I want to go the other way and capture a deliberately pixelated version of a page, I can use something like --dpr 0.33.
I have used this occasionally to size down an image online: I “View Image” to get it in its own window, then use screenshot with a fractional DPR value to shrink it. Yes, this is a rare use case, even for me, but hey — the option exists! I haven’t used the DPR option for my talks, but given the growing use of HD 16:9 projectors — something we’ve been using at An Event Apart for a while now, actually — I’m starting to lean toward using --dpr 2 to get sharper images.
And that’s not all! You can set a --delay in seconds, to make sure a popup menu or other bit of interaction is visible before the capture happens. If you want to take your captured image straight into another program before saving it, there’s --clipboard. And there’s an option to upload straight to --imgur, though I confess I haven’t figured out how that one works. I suspect you have to be logged into imgur first. If anyone knows, please leave a comment so the rest of us know how to use it!
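To pull all of the above together, here’s the shape of the invocations I’ve been describing, typed into the Developer Toolbar (the file names are placeholders):

```
screenshot myshot --fullpage          capture the whole page, even offscreen parts
screenshot myshot --chrome            visible page plus the browser chrome
screenshot fig047 --selector 'body'   capture just the element the selector matches
screenshot fig047 --dpr 4             capture at four times the device pixel ratio
screenshot menu --delay 3             wait three seconds before capturing
screenshot myshot --clipboard         copy the capture instead of saving it
```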
The one thing that irks me a little bit about screenshot is that the file name must come before the options. When I’m producing a bunch of figures in a row, having to drag-select just the file name for replacement is a touch tedious; I wish I could put the file name at the end of the command, so I could quickly drag-select it with a rightward wrist-flick. But all things considered, this is a pretty minor gripe.
The other thing I wish screenshot could do is let me define a precise width or height in pixels — or, since I’m dreaming, a value using any valid CSS length unit — and scale the result to that measure. This isn’t really useful for the CSS:TDG4e figures, but it could come in pretty handy for creating talk slides. No, I have no idea how that would interact with the DPR option, but I’d certainly be willing to find out.
So that’s one of my “unusual but essential” tools. What’s yours?
As I was reading an article with a few scattered apostrophe errors, I wished that I could highlight each one, hit a report button, and know that the author had been notified of the errors so that they could fix them. No requirement to leave a comment chastising them for bad grammar, replete with lots of textual context so they could find the errors — just a quick “hey, I spotted this error, now you know so you can fix it” notice, sent in private to them.
Then I realized that I wanted that for my own site, to let people tell me when I had gaffes in need of repair. It’s an almost-wiki, where the crowd can flag errors that need to be corrected without having to edit the source themselves — or have the power to edit it themselves, for that matter, which is an open door for abuse.
I haven’t thought this through in tons of detail, but here’s how it feels in my head:
Visitors highlight a typo and click a button to report it. Or else click a button to start reporting, highlight a word, and click again to submit. This part is kind of fuzzy in my head, and yes, “click” is not the best term here, but it’s one we all understand.
Interesting extra feature: the ability to classify the type of error when reporting. For example: apostrophe, misspelling, parallelism, pronoun trouble.
Other interesting extra feature: the ability to inform users of the ground rules before they report. For example: “This site uses British punctuation rules, the Oxford comma, and American spelling.” (Which I do.)
The author gets notice whenever an error is reported, or else can opt for a daily digest.
Each notice lets the author quickly accept or reject the reported error, much as can be done with edits in MS Word and similar programs, along with a link that will jump the author straight to the reported error so they can see it in context. If rejected, future reports of that word are disabled. If accepted, the change is made immediately, without requiring a dive into the CMS.
When an error is reported, future visitors to the site will see any already-reported errors in highlight. This keeps them from reporting the same thing over and over, and also acts as incentive to the author to fix errors quickly. (The highlight style could be customizable.)
Reports can only happen at the word level, not the individual letter level. So reporting an “it’s” error highlights all of “it’s”, not just the offending apostrophe. Perhaps also for multiple words, though only up to a certain number, like three. And yes, I’m keenly aware of the challenges of defining a “word” in an internationally-aware manner, but perhaps in ideographic languages you switch to per-symbol. (Not an expert here, so take that with a few grinders of salt.)
The author can optionally limit the number of reports permitted per hour/day/whatever. This could be enforced globally or on a per-user basis, though globally is a tad more robust.
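Since I’m imagining this as a CMS plugin, the core of it is really just a small data model plus a couple of rules: word-level reports, no duplicates of an open report, and no re-reporting of anything the author has rejected. Here’s a rough sketch of that lifecycle; every name in it (the classes, the statuses, the in-memory store) is hypothetical, and a real version would need persistent storage.

```python
from dataclasses import dataclass

# Hypothetical sketch of the typo-report flow described above.
@dataclass
class TypoReport:
    post_id: str
    word: str            # reports are word-level, never single letters
    offset: int          # character offset of the word in the post body
    category: str        # e.g. "apostrophe", "misspelling", "parallelism"
    status: str = "open"   # open -> accepted | rejected

class ReportQueue:
    def __init__(self):
        self.reports = []
        self.rejected = set()   # (post_id, offset) pairs: reporting disabled

    def submit(self, report):
        key = (report.post_id, report.offset)
        if key in self.rejected:
            return False   # author already rejected this word
        if any(r.post_id == report.post_id and r.offset == report.offset
               and r.status == "open" for r in self.reports):
            return False   # already reported; shown highlighted instead
        self.reports.append(report)
        return True

    def reject(self, report):
        report.status = "rejected"
        self.rejected.add((report.post_id, report.offset))
```

The highlight-for-future-visitors feature falls out of this for free: anything with an open report gets the highlight style, and `submit` refusing duplicates is what keeps the same error from being reported over and over.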
That’s how I see it working, after a few minutes’ thought. It seems pretty achievable as a CMS plugin, actually, though I confess that I don’t have anywhere close to the time and coding chops needed to make it happen right now (or any time soon). The biggest challenge to me seems like the “edit-on-accept-without-CMS-diving” part, since there are so many CMSes and particularly since static sites are staging a comeback. Still, I think it would be a fun and worthwhile project for someone out there. If somebody takes it on, I’d love to follow along and see where it ends up, particularly if they do it for WordPress (which is what the blog hereabouts runs on).
In the course of expanding my documentation of color values, I failed to find a table that listed all 147 SVG-and-CSS3-defined keywords along with the equivalent RGB decimal, RGB percent, HSL, hexadecimal, and (when valid) short-hex values. There were some tables that listed some but not all of those value types, and one that listed all the value types (plus CMYK) along with a few hundred other keywords, but none that listed all of the CSS keywords and value types. And none that I saw used precise values for the RGB percent and HSL types, preferring instead to round off at the expense of some subtle differences in color.
So I created my own table, which you can now find in the CSS area of meyerweb. Most of it is dynamically generated, taking a list of keywords and RGB decimal equivalents and then calculating the rest of the values from there. The presentation is still a work in progress, and may change a little or a lot. In particular, if I can’t find a better algorithm for determining which rows should have white text instead of black, I’ll probably shift away from the full-row background colors to some other presentation.
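The derivation itself is straightforward: everything in the table can be computed from each keyword’s RGB decimal triplet. My page does this in the browser, but here’s a Python sketch of the same math (the function name is mine, not from the page’s source); note that a short-hex value is only valid when both hex digits of every channel match.

```python
import colorsys

def color_values(r, g, b):
    """Derive hex, short hex, RGB percent, and HSL from 0-255 RGB."""
    hexval = f"#{r:02x}{g:02x}{b:02x}"
    # short hex requires both digits of each channel to be identical
    short = (f"#{hexval[1]}{hexval[3]}{hexval[5]}"
             if hexval[1] == hexval[2] and hexval[3] == hexval[4]
             and hexval[5] == hexval[6] else None)
    pct = tuple(round(c / 255 * 100, 2) for c in (r, g, b))
    # colorsys returns (h, l, s), each 0-1; rescale to CSS's hsl() units
    h, l, s = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
    hsl = (round(h * 360, 2), round(s * 100, 2), round(l * 100, 2))
    return hexval, short, pct, hsl

print(color_values(255, 0, 255))   # fuchsia: #ff00ff, #f0f, hsl(300, 100%, 50%)
```

Rounding to two decimal places, rather than to integers, is exactly the precision point I mentioned: integer-rounded percentages and HSL values can land on a subtly different color.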
My thanks to Tab Atkins for his donation of the RGB-to-HSL routine I adapted as well as help improving the pick-light-or-dark-text algorithm; and to the people who responded positively on Twitter when I mused about the idea there.
I have this very odd problem that seems to be some combination of PDF, Acrobat, Outlook, Thunderbird, and maybe even IMAP and GMail. I know, right?
The problem is that certain PDFs sent to me by a single individual won’t open at first. I’ll get one as an email attachment. I drag the attachment to a folder in my (Snow Leopard) Finder and double-click it to open. The error dialog I immediately get from Acrobat Professional is:
There was an error opening this document. The file is damaged and could not be repaired.
Preview, on the other hand, tells me:
The file “[redacted]” could not be opened. It may be damaged or use a file format that Preview doesn’t recognize.
When this happens, I tell the person who sent me the file that The Problem has happened again. She sends me the exact same file as an attachment. Literally, she just takes the same file she sent before and drags it onto the new message to send to me again.
And this re-sent file opens without incident. Every time. Furthermore, extra re-sends open without incident. I recently had her send me the same initially damaged file five times, some attached to replies and others to brand-new messages. All of them opened flawlessly. The initially damaged file remained damaged.
Furthermore, if I go through the GMail web interface, I can view the initial attached PDF (the one my OS X applications say is damaged) through the GMail UI without trouble. If I download that attachment to my hard drive, it similarly opens in Acrobat (and Preview) without trouble.
A major indication of damage: that first download is a different size than all the others. In the most recent instance, the damaged file is 680,302 bytes. The undamaged files are all 689,188 bytes. If only I knew why it’s damaged the first time, and not all the others!
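Next time it happens, a byte-level comparison might at least show where the two copies part ways; with a truncation bug, the files often match up to the point where bytes went missing. Something like this throwaway script would do it (the file paths are placeholders):

```python
def first_difference(path_a, path_b):
    """Return the offset of the first differing byte, or None if equal."""
    with open(path_a, "rb") as fa, open(path_b, "rb") as fb:
        a, b = fa.read(), fb.read()
    if len(a) != len(b):
        print(f"sizes differ: {len(a)} vs {len(b)} bytes")
    for i, (x, y) in enumerate(zip(a, b)):
        if x != y:
            return i
    # identical up to the shorter length; differ only if sizes differ
    return min(len(a), len(b)) if len(a) != len(b) else None
```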
So far, I’ve yet to see this happen with PDFs from anyone else, but then I receive very few attached PDFs from people other than this one (our events manager at An Event Apart, who sends and receives PDFs and Office documents like they’re conversational speech — an occupational hazard of her line of work), and it only seems to happen with PDFs of image scans that she’s created. Other types of PDFs, whether she generated them or not, seem to come through fine; ditto for other file types, like Word documents. I’d be tempted to blame the scanning software, but again: the exact same file is damaged the first time, and fine on every subsequent re-attachment.
I’ve done some Googling, and found scattered advice on ways clear up corrupted-PDF-attachment problems in Thunderbird. I’ve followed these pieces of advice, and nothing has helped. In summary, I have so far:
Set mail.server.default.fetch_by_chunks to false.
Set mail.imap.mime_parts_on_demand to false.
Set mail.server.default.mime_parts_on_demand to false.
Tried the Thunderbird extension Open Attachment By Extension. That failed, and so I immediately uninstalled it because handling files by extension alone is just asking to be pwned, regardless of your operating system or personal level of datanoia. (I wouldn’t have left it installed had it worked; I just wanted to see if it did work as a data point.)
Here’s what I know about the various systems in play here:
I’m using Thunderbird 11.0.1 on OS X 10.6.8.
The attachments are always sent via Outlook 2010 on Windows 7.
The software used for the scanning is the HP scanning software that was installed with the scanner. Scans are saved to the hard drive, renamed, and then manually attached to the email. On resend, the same file is manually attached to the email.
A little while back, I was reading some text when I realized the hyphens didn’t look quite right. A little too wide, I thought. Not em-dash wide, but still…wide. Wide-ish? But when I copied some of the text into a BBEdit window, they looked just like the hyphens I typed into the document.
Of course, I know Unicode is filled with all manner of symbols and that the appearance of those symbols can vary from one font face to another. So I changed the font face, made the size really huge, and behold: they were indeed different characters. At this point, I was really curious about what I’d found. What exactly was it? How would I find out?
For the record, here’s the character in question:
Googling “−” and “− Unicode” got me nothing useful. I knew I could try the Character Viewer in OS X, and eventually I did, but I was wondering if there was a better (read: lazier) solution. I asked the Twittersphere for advice, and while I don’t know if these solutions are any lazier, here are the best of the suggestions I received.
Unicode Lookup, a site that lets you input or paste in any character and get a report on what it is and how one might call it in various encodings.
Richard Ishida’s UniView Lite, which does much the same as Unicode Lookup with the caveat that once you’ve input your character, you have to hit the “Chars” button, not the “Search” button. The latter is apparently how you search Unicode character names for a word or other string, like “dash” or “quot”.
UnicodeChecker (OS X), a nice utility that includes a character list pane as well as the ability to type or paste a character into an input and instantly get its gritty details.
Any of those will tell you that the − in question is MINUS SIGN, codepoint 8722 (decimal) / 2212 (UTF-16 hex) / U+2212 (Unicode hex) / et cetera, et cetera. Did you know it was designated in Unicode 1.1? Now you do, thanks to UnicodeChecker and this post. You’re welcome.
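If you’d rather not leave the terminal at all, Python’s standard-library unicodedata module will tell you the same thing. The helper function here is my own wrapping, but the module calls are real:

```python
import unicodedata

def identify(char):
    """Report a character's Unicode name, codepoint, and category."""
    cp = ord(char)
    return {
        "name": unicodedata.name(char, "<unnamed>"),
        "decimal": cp,
        "codepoint": f"U+{cp:04X}",
        "category": unicodedata.category(char),
    }

print(identify("−"))   # the mystery character from above
print(identify("-"))   # a plain keyboard hyphen, for comparison
```

Run that and the difference jumps right out: one is MINUS SIGN, a math symbol, and the other is HYPHEN-MINUS, a dash.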
Update 2 Mar 12: Philippe Wittenberg points out in the comments that you can add a UnicodeChecker service. With that enabled, all you have to do is highlight a character, summon the contextual menu (right-click, for most of us), and have it shown in UnicodeChecker. Now that’s the kind of laziness I was trying to attain!
Over the weekend, Aaron Gustafson and I created a tool for anyone who wants to resolve a series of CSS transforms into a matrix() value representing the same end state. Behold: The Matrix Resolutions. (You knew that was coming, right?) It should work fine in various browsers, though due to the gratuitous use of keyframe animations on the html element’s multiple background images it looks best in WebKit browsers.
The way it works is you input a series of transform functions, such as translateX(22px) rotate(33deg) scale(1.13). The end-state and its matrix() equivalent should update whenever you hit the space bar or the return key, or else explicitly elect to take the red pill. If you want to wipe out what you’ve input and go back to a state of blissful ignorance, take the blue pill.
There is one thing to note: the matrix() value you get from the tool is equivalent to the end-state placement of all the transforms you input. That value most likely does not create an equivalent animation, particularly if you do any rotation. For example, animating translateX(75px) rotate(1590deg) translateY(-75px) will not appear the same as animating matrix(-0.866025, 0.5, -0.5, -0.866025, 112.5, 64.9519). The two values will get the element to the same destination, but via very different paths. If you’re just transforming, not animating, then that’s irrelevant. If you are, then you may want to stick to the transforms.
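The end-state math behind that example is just matrix multiplication: CSS’s matrix(a, b, c, d, e, f) maps (x, y) to (a·x + c·y + e, b·x + d·y + f), and a transform list composes left to right. This isn’t the tool’s actual source, just a sketch of the same computation in Python:

```python
import math

def multiply(m1, m2):
    """Compose two CSS-style (a, b, c, d, e, f) matrices, m1 then m2."""
    a1, b1, c1, d1, e1, f1 = m1
    a2, b2, c2, d2, e2, f2 = m2
    return (a1*a2 + c1*b2, b1*a2 + d1*b2,
            a1*c2 + c1*d2, b1*c2 + d1*d2,
            a1*e2 + c1*f2 + e1, b1*e2 + d1*f2 + f1)

def translate(tx, ty):
    return (1, 0, 0, 1, tx, ty)

def rotate(deg):
    r = math.radians(deg)
    return (math.cos(r), math.sin(r), -math.sin(r), math.cos(r), 0, 0)

# translateX(75px) rotate(1590deg) translateY(-75px), as in the example
m = multiply(multiply(translate(75, 0), rotate(1590)), translate(0, -75))
print(tuple(round(v, 6) for v in m))
# -> (-0.866025, 0.5, -0.5, -0.866025, 112.5, 64.951905)
```

Note that 1590 degrees collapses to 150 degrees in the end state, which is exactly why the matrix() value loses the four extra rotations when animated.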
So anyway, there you go. If you want to see the matrix(), remember: we can only show you the door. You’re the one that has to walk through it.
Earlier today, I updated the CSS Tools: Reset CSS page to list the final version of Reset v2.0, as well as updated the reset.css file in that directory to be v2.0. (I wonder how many hotlinkers that will surprise.) In other words, it’s been shipped. Any subsequent changes will trigger version number changes.
There is one small change I made between 2.0b2 and 2.0 final, which is the replacement of the “THIS IS BETA” warning text with an explicit lack of license. The reset CSS has been in the public domain ever since I first published it, and the Reset CSS page explicitly said it was, but the file itself never said one way or the other. Now it does.
Thanks to everyone who contributed their thoughts and perspectives on the new reset. Here’s to progress!