Thoughts From Eric Archive

Adding Pandoc Arguments in BBEdit

Published 3 years, 7 months past

Thanks to the long and winding history of my blog, I write posts in Markdown in BBEdit, export them to HTML, and paste the resulting HTML into WordPress. I do it that way because switching WordPress over to auto-parsing Markdown in posts causes problems with rendering the markup of some posts I wrote 15-20 years ago, and finding and fixing every instance is a lengthy project for which I do not have the time right now.

(And I don’t use the block editor because whenever I use it to edit an old post, the markup in those posts gets mangled so much that it makes me want to hurl. This is as much the fault of my weird, idiosyncratic, bespoke-ancient setup as of WordPress itself, but it’s still super annoying, so I avoid it entirely.)

Anyway, the point here is that I write Markdown in BBEdit, and export it from there. This works okay, but there have always been things missing, like a way to easily add attributes to elements such as my code blocks. BBEdit’s default Markdown exporter, CommonMark, sort of supports that, except it doesn’t appear to give me control over the class names: telling it I want a class value of css on a preformatted block means I get a class value of language-css instead. It also puts that class value on the code element it inserts into the pre element, rather than attaching it directly to the pre element. Not good, unless I start using Prism, which I may one day but am not yet.
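
To make that concrete: hand a CommonMark-style converter a fenced block like this (the CSS inside is just a stand-in)…

```css
pre {padding: 0.5em;}
```

…and the markup that comes back looks something like the following, with the renamed class on the code element rather than on the pre:

<pre><code class="language-css">pre {padding: 0.5em;}</code></pre>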

Pandoc, another exporter you can use in BBEdit, offers much more robust and yet simple element attribute attachment: you put {.class #id} or whatever at the beginning of any element, and you get those things attached directly to the element. But by default, it also wraps elements around, and adds attributes to, the pre element, apparently in anticipation of some other kind of syntax highlighting.
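
For a code block, that attribute block rides along on the opening fence, so a block I want to come out with a class of css and an ID might be written like this (the ID here is just an example):

``` {.css #grid-notes}
main {display: grid;}
```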

I spent an hour reading the Pandoc man page (just kidding, I was actually skimming; that’s the only way I could possibly get through all that in an hour) and found the --no-highlight option. Perfect! So I dropped into Preferences > Languages > Language-specific settings: Markdown > Markdown, set the “Markdown processor” dropdown to “Custom”, and filled in the following:

Command: pandoc
Arguments: --no-highlight
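
To check the same conversion outside of BBEdit, the equivalent command-line invocation is simply this (the file names are placeholders):

pandoc --no-highlight draft.md -o draft.html

With --no-highlight in place, a fenced block like the one shown earlier comes out as something like <pre id="grid-notes" class="css"><code>…</code></pre>, with the attributes attached directly to the pre element and no extra wrappers. (The exact markup may vary a bit between Pandoc versions.)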

Done and done. I get a more powerful flavor of Markdown in an editor I know and love. It’s not perfect — I still have to manually tweak table markup by hand, for example — but it’s covering probably 95% of my use cases for writing blog posts.

Now all I need to do is find a Pandoc Markdown option or extension or whatever that keeps it from collapsing the whitespace between elements in its HTML output, and I’ll be well and truly satisfied.


Scripted Server Startup for MDN and WPT

Published 3 years, 8 months past

A sizeable chunk of my work at Igalia so far involves editing and updating the Mozilla Developer Network (MDN), and a smaller chunk has me working on the Web Platform Tests (WPT).  In both cases, the content is stored in large public repositories (MDN, WPT) and contributors are encouraged to fork the repositories, clone them locally, and push updates via the fork as PRs (Pull Requests).  And while both repositories roll in localhost web server setups so you can preview your edits locally, each has its own.

As useful as these are (if you ignore the whole “auto-force a browser page reload every time the file is modified in any way whatsoever” thing, which I’ve been trying very hard to keep from discouraging me from saving often), each has to be started in its own way, from within its respective repository directory, and it’s generally a lot more convenient to do so in a separate Terminal window.

I was getting tired of constantly opening a new Terminal window, cding into the correct place, remembering the exact invocation needed to launch the local server, and on and on, so I decided to make my life slightly easier with a few short scripts and aliases.  Maybe this will be useful to you as well.

First, I decided to keep things relatively simple.  Instead of writing a small program that would handle all server startups by parsing shell arguments and what have you, I wrote a couple of very similar shell scripts.  Here’s the script for launching MDN’s localhost:

#!/bin/bash
# Switch into the local MDN content checkout.
cd ~/repos/mdn/content/
# Start the local preview server, per the MDN setup instructions.
yarn start

Then I added an alias to ~/.bashrc which employs a technique I swiped from this Stack Overflow answer.

alias mdn-server="open -a Terminal.app ~/bin/mdn-start.bsh"

Translated into English, that means “open the file ~/bin/mdn-start.bsh using the application (-a) Terminal.app”.

Thus, when I type mdn-server in any command prompt, a new Terminal window will open and the shell script mdn-start.bsh will be run; the script switches into the needed directory and launches the localhost server using yarn, as per the MDN instructions.  What’s more, when I’m done working on MDN, I can switch to the window running the server, stop the server with ⌃C (control-C), and the Terminal window closes automatically.

I did something very similar for WPT, except in this case the alias reads:

alias wpt-server="open -a Terminal.app ~/bin/wpt-serve.bsh"

And the script to which it points reads:

#!/bin/bash
# Switch into the local WPT checkout.
cd ~/repos/wpt/
# Start WPT’s built-in local server.
./wpt serve

As I mentioned before, I chose to do it this way rather than writing a single alias (say, local-server) that would accept arguments (mdn, wpt, etc.) and fire off scripts accordingly, but that’s also an option and a viable one at that.
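
For the curious, here’s a minimal sketch of what that might look like, written as a shell function rather than a true alias, since aliases can’t take arguments; the function name is arbitrary and the script paths are the same ones used above:

local-server() {
	# Pick the startup script based on the argument, then open it in a new Terminal window.
	case "$1" in
		mdn) open -a Terminal.app ~/bin/mdn-start.bsh ;;
		wpt) open -a Terminal.app ~/bin/wpt-serve.bsh ;;
		*)   echo "usage: local-server mdn|wpt" >&2 ;;
	esac
}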

So that’s my little QoL (Quality of Life) upgrade to make working on MDN and WPT a little easier.  I hope it helps you in some way!


First Month at Igalia

Published 3 years, 9 months past

Today marks one month at Igalia.  It’s been a lot, and there’s more to come, but it’s been a really great experience.  I get to do things I really enjoy and value, and Igalia supports and encourages all of it without trying to steer me in specific directions.  I’ve been incredibly lucky to experience that kind of working environment twice in my life — and the other one was an outfit I helped create.

Here’s a summary of what I’ve been up to:

  • Generally got up to speed on what Igalia is working on (spoiler: a lot).
  • Redesigned parts of wpewebkit.org, fixed a few outstanding bugs, edited most of the rest. (The site runs on 11ty, so I’ve been learning that as well.)
  • Wrote a bunch of CSS tests/demos that will form the basis for other works, like articles and videos.
  • Drafted a few of said articles.  As I write this, two are very close to being complete, and a third is almost ready for editing.
  • Edited some pages on the Mozilla Developer Network (MDN), clarifying or upgrading text in some places and replacing unclear examples in others.
  • Joined the Open Web Docs Steering Committee.
  • Reviewed various specs and proposals (e.g., Miriam’s very interesting @scope proposal).

And that’s not all!  Here’s what I have planned for the next few months:

  • More contributions to MDN, much of it in the CSS space, but also branching out into documenting some up-and-coming APIs in areas that are fairly new to me.  (Details to come!)
  • Contributions to the Web Platform Tests (WPT), once I get familiar with how that process is structured.
  • Articles on topics that will include (but are not limited to!) gaps in CSS, logical properties, and styling based on writing direction.  I haven’t actually settled on outlets for those yet, so if you’d be interested in publishing any of them, hit me up.  I usually aim for about a thousand words, including example markup and CSS.
  • Very likely will rejoin the CSS Working Group after a (mumblecough)-year absence.
  • Assembling a Raspberry Pi system to test out WPEWebKit in its native, embedded environment and get a handle on how to create a “setting up WPEWebKit for total embedded-device noobs” guide, of which I am one.

That last one will be an entirely new area for me, as I’ve never really worked with an embedded-device browser before.  WPEWebKit is a WebKit port, actually the official WebKit port for embedded devices, and as such is aggressively tuned for performance and low resource demand.  I’m really looking forward to not only seeing what it’s like to use it, but also how I might be able to leverage it into some interesting projects.

WPEWebKit is one of the reasons why Igalia is such a big contributor to WebKit, helping drive its standards support forward and raise its interoperability with other browser engines.  There’s a thread of self-interest there: a better WebKit means a better WPEWebKit, which means more capable embedded devices for Igalia’s clients.  But after a month on the inside, I feel comfortable saying most of Igalia’s commitment to interoperability is philosophical in nature — they truly believe that more consistency and capability in web browsers benefits everyone.  As in, THIS IS FOR EVERYONE.

And to go along with that, more knowledge and awareness is seen as an unvarnished good, which is why they have me working on MDN content.  To that end, I’m putting out an invitation here and now: if you come across a page on MDN about CSS or HTML that confuses you, or seems inaccurate, or just doesn’t have much information at all, please get in touch to let me know, particularly if you are not a native English speaker.

I can’t offer translation services, unfortunately, but I can do my best to make the English content of MDN as clear as possible.  Sometimes, what makes sense to a native English speaker is obscure or unclear to others.  So while this offer is open to everyone, don’t hold back if you’re struggling to parse the English.  It’s more likely the English is unclear and imprecise, and I’d like to erase that barrier if I can.

The best way to submit a report is to send me email with [MDN] and the URL of the page you’re writing about in the subject line.  If you’re writing about a collection of pages, put the URLs into the email body rather than the subject line, but please keep the [MDN] in the subject so I can track it more easily.  You can also ping me on Twitter, though I’ll probably ask you to email me so I don’t lose track of the report.  Just FYI.

I feel like there was more, but this is getting long enough and anyway, it already seems like a lot.  I can’t wait to share more with you in the coming months!


First Week at Igalia

Published 3 years, 10 months past

The first week on the job at Igalia was… it was good, y’all.  Upon formally joining the Support Team, I got myself oriented, built a series of tests-slash-demos that will be making their way into some forthcoming posts and videos, and forked a copy of the Mozilla Developer Network (MDN) so I can start making edits and pushing them to the public site.  In fact, the first of those edits landed Sunday night!  And there was the usual setting up accounts and figuring out internal processes and all that stuff.

A series of tests of the CSS logical property border-block.
Illustrating the uses of border-block.

To be perfectly honest, a lot of my first-week momentum was provided by the rest of the Support Team, and by the expectations set during the interview process.  You see, at one point in the past I had a position like this, and I had problems meeting expectations.  This was partly due to my inexperience working in that sort of setting, but also partly due to a lack of clear communication about expectations.  Which I know because I thought I was doing well in meeting them, and then was told otherwise in evaluations.

So when I was first talking with the folks at Igalia, I shared that experience.  Even though I knew Igalia has a different approach to management and evaluation, I told them repeatedly, “If I take this job, I want you to point me in a direction.”  They’ve done exactly that, and it’s been great.  Special thanks to Brian Kardell in this regard.

I’m already looking forward to what we’re going to do with the demos I built and am still refining, and to making more MDN edits, including some upgrades to code examples.  And I’ll have more to say about MDN editing soon.  Stay tuned!


First Day at Igalia

Published 3 years, 10 months past

Today is my first day as a full-time employee at Igalia, where I’ll be doing a whole lot of things I love to do: document and explain web standards at MDN and other places, participate in standards work at the W3C, take on some webmaster duties, and play a part in planning Igalia’s strategy with respect to advancing the web.  And likely other things!

I’ll be honest, this is a pretty big change for me.  I haven’t worked for anyone other than myself since 2003.  But the last time I did work for someone else, it was for Netscape (slash AOL slash Time Warner) as a Standards Evangelist, a role I very much enjoyed.  In many ways, I’m taking that role back up at Igalia, in a company whose values and structure are much more in line with my own.  I’m really looking forward to finding out what we can do together.

If the name Igalia doesn’t ring any bells, don’t worry: nobody outside the field has heard of them, and most people inside the field haven’t either.  So, remember when CSS Grid came to browsers back in 2017?  Igalia did the implementation that landed in Safari and Chromium.  They’ve done a lot of other things besides that — some of which I’ll be helping to spread the word about — but it’s the thing that web folks will be most likely to recognize.

This being my first day and all, I’m still deep in the setting up of logins and filling out of forms and general orienting of oneself to a new team and set of opportunities to make a positive difference, so there isn’t much more to say besides I’m stoked and planning to say more a little further down the road.  For now, onward!


Highlighting Accessible Twitter Content

Published 3 years, 11 months past

For my 2020 holiday break, I decided to get more serious about supporting the use of alternative text on Twitter.  I try to be rigorous about adding descriptive text to my images, GIFs, and videos, but I want to be more conscientious about not spreading inaccessible content through my retweets.

The thing is, Twitter doesn’t make it obvious whether someone else’s content has been described, and the way it structures (if I can reasonably use that word) its content makes it annoyingly difficult to conduct element or accessibility-property inspections.  So, in keeping with the design principles that underlie both the Web and CSS, I decided to take matters into my own hands.  Which is to say, I wrote a user stylesheet.

I started out by calling out things that lacked useful alt text.  It went something like this:

div[aria-label="Image"]::before {
	content: "WARNING: no useful ALT text";
	background: yellow;
	border: 0.5em solid red;
	border-radius: 1em;
	box-shadow: 0 0 0.5em black;
	…

…and so on, layering on some sizing, font stuff, and positioning to hopefully place the text where it would be visible.  This failed to satisfy for two reasons:

  1. Because of the way Twitter nests its content dozens of utility-classed divs deep, and the styles those classes repeatedly apply, many images were tall (but cut off) or wide (ditto) in ways that pulled the positioned generated text out of the visible frame shown on the site.  There wasn’t an easily found, human-readable, predictable way to address the element I wanted to use as a positioning context.  So that was a problem.
  2. Almost every image in my feed had a big red and yellow WARNING on it, which quickly depressed me.

What I realized was that rather than calling out the failures, I needed to highlight the successes.  So I commented out the Big Red Angry Text approach above and went with something a lot simpler.

div[aria-label="Image"] {
	filter: grayscale(1) contrast(0.5);
}
div[aria-label="Image"]:hover {
	filter: none;
}

A screenshot of the author’s Twitter timeline, showing three tweets.  The middle tweet is text only.  The top tweet has a blurred-out image which is grayscale and of reduced contrast, indicating it has no useful alternative text.  The bottom tweet is also blurred, but in full color and contrast, indicating it has been given useful alternative text by the person who posted it.
Three consecutive tweets from my timeline on Friday, January 1st, 2021.  The blurring of the images in the top and bottom tweets is an effect of the Data Saver preference, not my CSS.

Just that.  De-emphasize the images that have the default alt text by way of their enclosing divs, and remove that effect on hover so I can see the image as intended if I so choose.  I like this approach better because it de-emphasizes images that aren’t properly described, while those which are described get a visual pop.  They stand out as lush islands in a flat sea.

In case you’ve been wondering why I’m selecting divs instead of img and video elements, it’s because I use the Data Saver setting on Twitter, which requires me to click on an image or video to load it.  (You can set it via Settings > Accessibility, display and languages > Data usage > Data saver.  It’s also what’s blurring the images in the screenshot shown here.)  I enable this setting to reduce network load, but also to give me an extra layer of protection when disturbing images and videos circulate.  I generally follow people who are careful about not sharing disturbing content, but I sometimes go wandering outside my main timeline, and never know what I’ll find out there.

After some source digging, I discovered a decent way to select non-described videos, which I combined with the existing image styles:

div[aria-label="Image"],
div[aria-label="Embedded video"] {
	filter: grayscale(1) contrast(0.5);
}
div[aria-label="Image"]:hover,
div[aria-label="Embedded video"]:hover {
	filter: none;
}

The fun part is, Twitter’s architecture spits out nested divs with that same ARIA label for videos, which I imagine could be annoying to people using screen readers.  Besides that, it also has the effect of applying the filter twice, which means videos that haven’t been described get their contrast double-reduced!  And their grayscale double-enforced!  Fun.

What I didn’t expect was that when I start playing a video, it loses the grayscale and contrast-reduction effects even when it isn’t being hovered, which makes the second rule above a little redundant.  I don’t see the DOM structure changing a whole lot when the video loads and plays, so either videos are being treated differently for filter purposes, or I’m missing something in the DOM that’s invalidating the selector matching.  I might poke at it over time to find a fix, or I may just let it go.  The user experience isn’t too far off what I wanted anyway.

There is a gap in my coverage, which is GIFs pulled from Twitter’s GIF pool.  These have default alt text other than Image, which makes selecting for them next to impossible.  Just a few examples pulled from Firefox’s Accessibility panel when I searched the GIF panel for “this is a test”:

testing GIF
This Is ATest Fool GIF
Corona Test GIF by euronews
Test Fail GIF
Corona Virus GIF by guardian
Its ATest Josh Subdquist GIF
Corona Stay Home GIF by INTO ACTION
Is This A Test GIF
Stressed Out Community GIF
A1b2c3 GIF

I assume these are Giphy titles or something like that.  In nearly every case, they’re insufficient, if not misleading or outright useless.  I looked for markers in the DOM to be able to catch these, but didn’t find anything that was obviously useful.

I did think briefly about filtering for any aria-label that contains the string GIF ([aria-label*="GIF"]), but that would improperly catch images and videos that have been described but happen to have the string GIF inside them somewhere.  This might be a relatively rare occurrence, but I’m loth to gray out media that someone went to the effort of describing.  I may change my mind about this, but for now, I’m accepting that GIFs which appear in full color are probably not described, particularly when containing common memes, and will try to be careful.
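
For the record, that rule would be as simple as the following, mirroring the rules above, though as I said, I’m not using it for now:

div[aria-label*="GIF"] {
	filter: grayscale(1) contrast(0.5);
}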

I apply the above styles in Firefox using Stylus, which is also available for Chrome, and they’re working pretty well for me.  I wish I could figure out a way to apply them in mobile contexts, but that’s a (much bigger) problem for another day.

I’m not the first to tread this ground, nor do I expect to be the last, sadly.  For a deeper dive into all the details of Twitter accessibility and the pitfalls that can occur, please read Adrian Roselli’s excellent article Improving Your Tweet Accessibility from just over two years ago.  And if you want to apply accessibility-aid CSS to your own Twitter experience but can’t or won’t use Stylus, Adrian has a bookmarklet that injects Twitter alt text, all set up and ready to go — you can use it as-is, or replace the CSS in his bookmarklet with mine above or your own if you want to take a different approach.

So that’s how I’m upping my awareness of accessible content on Twitter in 2021.  I’d love to hear what y’all are using to improve your own experiences, or links to tools and resources on this same topic.  If you have any of that, please drop the links in a comment below, so that everyone who reads this can benefit.  Thanks!


Polite Bash Commands

Published 4 years, 2 months past

For years, I’ve had a bash alias that re-runs the previous command via sudo.  This is useful in situations where I try to do a thing that requires root access, and I’m not root (because I am never root).  Rather than have to retype the whole thing with a sudo on the front, I just type please and it does that for me.  It looked like this in my .bashrc file:

alias please='sudo "$BASH" -c "$(history -p !!)"'

But then, the other day, I saw Kat Maddox’s tweet about how she aliases please straight to sudo, so to do things as root, she types please apt update, which is equivalent to sudo apt update.  Which is pretty great, and I want to do that!  Only, I already have that word aliased.

What to do?  A bash function!  After commenting out my old alias, here’s what I added to .bash_profile:

please() {
	# If arguments were given, run them as a command via sudo.
	if [ "$1" ]; then
		sudo "$@"
	else
		# Otherwise, re-run the previous command from history via sudo.
		sudo "$BASH" -c "$(history -p !!)"
	fi
}

That way, if I remember to type please apachectl restart, as in Kat’s setup, it will ask for the root password and then execute the command as root; if I forget my manners and simply type apachectl restart, then when I’m told I don’t have privileges to do that, I just type please and the old behavior happens.  Best of both worlds!
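
In practice, the two behaviors look something like this (apachectl restart is just the example command from above):

$ apachectl restart
  # fails, because restarting Apache requires root privileges
$ please
  # re-runs "apachectl restart" via sudo, prompting for a password
$ please apachectl restart
  # or skip a step and run the command via sudo from the start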


Reply links in RSS items

Published 4 years, 3 months past

Inspired by Jonnie Hallman, I’ve added a couple of links to the bottom of RSS items here on meyerweb: a link to the commenting form on the post, and a mailto: link to send me an email reply.  I prefer that people comment, so that other readers can gain from the reply’s perspective, but not all comments are meant to be public.  Thus, the direct-mail option.

As Jonnie says, it would be ideal if all RSS readers just used the value of the author element (assuming it’s an email address) to provide an email action; if they did, I’d almost certainly add mine to my posts.  Absent that, adding a couple of links to the bottom of RSS items is a decent alternative.
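
For reference, RSS 2.0 defines the author element as carrying an email address, conventionally with the name in parentheses after it, along these lines (the address and name here are placeholders):

<author>editor@example.com (Site Author)</author>

So an RSS reader really would have everything it needs to offer a reply action.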

Since the blog portion of meyerweb (and therefore its RSS) are powered by WordPress, I added these links programmatically via the functions.php file in the site’s theme.  It took me a bit to work out how to do that, so here it is in slightly simplified form (update: there’s an even more simplified and efficient version later in the post):

function add_contact_links ( $text ) {
   // Only append the links when the content is being rendered for a feed.
   if (is_feed()) {
      $text .= '
      <hr><p><a href="' . get_permalink() . '#commentform">Add a comment to the post</a>, or <a href="mailto:admin@example.com?subject=In%20reply%20to%20%22' . str_replace(' ', '%20', get_the_title()) . '%22">email a reply</a>.</p>';
   }
   return $text;
}
add_filter('the_content', 'add_contact_links');

If there’s a more efficient way to do that in WordPress, please leave a comment to tell the world, or email me if you just want me to know.  Though I’ll warn you, a truly better solution will likely get blogged here, so if you want credit, say so in the email.  Or just leave a comment!  You can even use Markdown to format your code snippets.


Update 2020-09-10: David Lynch shared a more efficient way to do this, using the WordPress hook the_content_feed instead of plain old the_content.  That removes the need for the is_feed() check, so the above becomes the more compact:

function add_contact_links ( $text ) {
   $text .= '
   <hr><p><a href="' . get_permalink() . '#commentform">Add a comment to the post</a>, or <a href="mailto:admin@example.com?subject=In%20reply%20to%20%22' . str_replace(' ', '%20', get_the_title()) . '%22">email a reply</a>.</p>';
   return $text;
}
add_filter('the_content_feed', 'add_contact_links');

Thanks, David!

