Posts in the Tech Category

Collective Editorial: the Plugin

Published 11 years, 6 months past

As I was reading an article with a few scattered apostrophe errors, I wished that I could highlight each one, hit a report button, and know that the author had been notified of the errors so that they could fix them.  No requirement to leave a comment chastising them for bad grammar, replete with lots of textual context so they could find the errors — just a quick “hey, I spotted this error, now you know so you can fix it” notice, sent in private to them.

Then I realized that I wanted that for my own site, to let people tell me when I had gaffes in need of repair.  It’s an almost-wiki, where the crowd can flag errors that need to be corrected without having to edit the source themselves — or have the power to edit it themselves, for that matter, which is an open door for abuse.

I haven’t thought this through in tons of detail, but here’s how it feels in my head:

  • Visitors highlight a typo and click a button to report it.  Or else click a button to start reporting, highlight a word, and click again to submit.  This part is kind of fuzzy in my head, and yes, “click” is not the best term here, but it’s one we all understand.
  • Interesting extra feature: the ability to classify the type of error when reporting.  For example: apostrophe, misspelling, parallelism, pronoun trouble.
  • Other interesting extra feature: the ability to inform users of the ground rules before they report.  For example: “This site uses British punctuation rules, the Oxford comma, and American spelling.”  (Which I do.)
  • The author gets notice whenever an error is reported, or else can opt for a daily digest.
  • Each notice lets the author quickly accept or reject the reported error, much as can be done with edits in MS Word and similar programs, along with a link that will jump the author straight to the reported error so they can see it in context.  If rejected, future reports of that word are disabled.  If accepted, the change is made immediately, without requiring a dive into the CMS.
  • When an error is reported, future visitors to the site will see any already-reported errors in highlight.  This keeps them from reporting the same thing over and over, and also acts as incentive to the author to fix errors quickly.  (The highlight style could be customizable; see the sketch just after this list.)
  • Reports can only happen at the word level, not the individual letter level.  So reporting an “it’s” error highlights all of “it’s”, not just the offending apostrophe.  Perhaps also for multiple words, though only up to a certain number, like three.  And yes, I’m keenly aware of the challenges of defining a “word” in an internationally-aware manner, but perhaps in ideographic languages you switch to per-symbol.  (Not an expert here, so take that with a few grinders of salt.)
  • The author can optionally limit the number of reports permitted per hour/day/whatever.  This could be enforced globally or on a per-user basis, though globally is a tad more robust.
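
To make the highlight bit a little more concrete, here’s a minimal sketch of the kind of rule a site owner might supply, assuming the plugin wrapped already-reported text in some class like reported-typo (the class name and styling are invented, of course, since the plugin doesn’t exist yet):

/* hypothetical class the plugin would add around already-reported text */
.reported-typo {
	background: #ffc;           /* soft highlight so the flag is visible */
	outline: 1px dotted #c90;   /* dotted edge so it reads as flagged, not selected */
	cursor: help;               /* hint that hovering or clicking has meaning */
}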

That’s how I see it working, after a few minutes’ thought.  It seems pretty achievable as a CMS plugin, actually, though I confess that I don’t have anywhere close to the time and coding chops needed to make it happen right now (or any time soon).  The biggest challenge to me seems like the “edit-on-accept-without-CMS-diving” part, since there are so many CMSes and particularly since static sites are staging a comeback.  Still, I think it would be a fun and worthwhile project for someone out there.  If somebody takes it on, I’d love to follow along and see where it ends up, particularly if they do it for WordPress (which is what the blog hereabouts runs on).


Resurrected Landmarks

Published 11 years, 6 months past

It was just last week, at the end of April, that CERN announced the rebirth of The Very First URL, in all its responsive and completely presentable glory.  If you hit the root level of the server, you get some wonderful information about the Web’s infancy and the extraordinary thing CERN did in releasing it, unencumbered by patent or licensing restrictions, into the world, twenty years ago.

That’s not at all a minor point.  I don’t believe it overstates the case to say that if CERN hadn’t made the web free and open to all, it wouldn’t have taken over the net.  Like previous attempts at hypertext and similar information systems, it would have languished in a niche and eventually withered away.  There were other things that had to happen for the web to really take off, but none of them would have mattered without this one simple, foundational decision.

I would go even further and argue that this act infused the web, defining the culture that was built on top of it.  Because the medium was free and open, as was often the case in academic and hacker circles before it, the aesthetic of sharing freely became central to the web community.  The dynamic of using ideas and resources freely shared by others, and then freely sharing your own resources and ideas in return, was strongly encouraged by the open nature of the web.  It was an implicit encouragement, but no less strong for that.  As always, the environment shapes those who live within it.

It was in that very spirit that Dave Shea launched the CSS Zen Garden ten years ago this week.  After letting it lie fallow for the last few years, Dave has re-opened the site to submissions that make use of all the modern capabilities we have now.

It might be hard to understand this now, but the Zen Garden is one of the defining moments in the history of web design, and is truly critical to understanding the state of CSS before and after it debuted.  When histories of web design are written — and there will be — there will be chapters with titles like “Wired, ESPN, and the Zen Garden: Why CSS Ended Up In Everything”.

Before the Zen Garden, CSS was a thing you used to color text and set fonts, and maybe for a simple design, not for “serious” layout.  CSS design is boxy and boring, and impossible to use for anything interesting, went the conventional wisdom.  (The Wired and ESPN designs were held to be special cases.)  Then Dave opened the gates on the Zen Garden, with its five utterly different designs based on the very same document…and the world turned.

I’m known to be a history buff, and these days a web history buff, so of course I’m super-excited to see both these sites online and actively looked after, and you should be too.  You can see where it all started, and where a major shift in design occurred, right from the comfort of your cutting-edge nightly build of the latest and greatest browsers known to man.  That’s a rare privilege, and a testament to what CERN set free, two decades back.


Blink Support(s)

Published 11 years, 7 months past

Just a quick followup to last month’s post about @supports:

@supports (text-decoration: blink) {
	#test {
		color: green;
		background: yellow;
		text-decoration: blink;
	}
}

The result in every @supports-supporting browser I was able to test was green text on a yellow background, except in Firefox 22, which additionally blinks the text.  The latest nightly builds of Firefox 23 do not blink the text, thanks to bug 857820.

Discuss.


Unsupportable Promises

Published 11 years, 8 months past

Over the past year and a half, the CSS Working Group has been working on the CSS Conditional Rules Module Level 3.  Now, don’t get overexcited: this is not a proposal to add generalized, formal if/then/else or switch statements to CSS — though in a very limited way, it does just that.  This is the home of the @media rule, which lets you create if/then conditions with regard to the media environment.  It’s also the home of the @supports rule, which lets you…well, that’s actually more complicated than you might think.

I mean, what do you think @supports means?  Take a moment to formulate a one-line definition of your understanding of what it does, before moving on to the rest of this piece.

If you’ve never heard of it before and wonder how it works, here’s a very basic example:

body {background-color: white;}
@supports (background-color: cornflowerblue) {
	body {background-color: cornflowerblue;}
}

The idea is that if the browser supports that property:value combination, then it will apply the rule or rules found inside the curly brackets.  In this sense, it’s just like @media rules: if the conditions in the parentheses are deemed to apply, then the rules inside the block are used.  The module refers to this ability as “feature queries”.

There are some logical combination keywords available: and, or, and not.  So you can say things like:

body {color: #222; background-color: white;}
@supports ((background-color: cornflowerblue) and (color: rgba(0,0,0,0.5))) {
	body {background-color: cornflowerblue; color: rgba(0,0,0,0.5);}
}
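
For completeness, here’s a quick sketch of how the or and not keywords read in the same syntax (minimal examples of my own, not drawn from the module itself):

/* apply the rule if either form of the value is understood */
@supports (background-color: cornflowerblue) or (background-color: rgb(100,149,237)) {
	body {background-color: cornflowerblue;}
}

/* fall back to an opaque color where rgba() values are not supported */
@supports not (color: rgba(0,0,0,0.5)) {
	body {color: #222;}
}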

Okay, but what does that actually mean?  Here’s what the specification says:

A CSS processor is considered to support a declaration (consisting of a property and value) if it accepts that declaration (rather than discarding it as a parse error). If a processor does not implement, with a usable level of support, the value given, then it must not accept the declaration or claim support for it.

So in that first sentence, what we’re told is that “support” means “accepts [a] declaration” and doesn’t drop it on the floor as something it doesn’t recognize.  In other words, if a browser parses a property:value pair, then that qualifies as “support” for said pair.  Note that this sentence says nothing about what happens after parsing.  According to this, a browser could have a completely botched, partial, and generally unusable implementation of the property:value pair, but the act of recognizing means that there’s “support”.

But wait!  That second sentence adds a constraint, after a fashion: there must be “a usable level of support”, the lack of which means that a browser “must not…claim support”.  So not only must a browser parse a property:value pair, it must also support it to “a usable level”.

But what constitutes a “usable level”?  According to everyone who’s told me that I was wrong about vendor prefixes, any browser implementation of a feature should be complete and error-free.  Is that what’s required to be regarded as a usable level?  How about if the implementation has one known bug?  Three?  Ten?  Can any of them be severe bugs?  What about merely serious bugs?  What if two browsers claim usable support, and yet are not interoperable?

So.  How does the definition of @supports match the one-line definition I asked you to formulate, back at the beginning?  Are they exactly the same, or is there a difference?

I suspect that most people, especially those coming across @supports for the first time, will assume that the word means that a browser has complete, error-free support.  That’s the implicit promise.  Very few people think of “supports” as a synonym for “recognizes” (let alone “parses”).  There’s a difference, sometimes a very large one, between recognizing a thing and supporting it.  I’m sure that browser teams will do their best to avoid situations where a property:value pair is parsed but not well supported, but it’s only a matter of time before a “supported” pair proves to be badly flawed, or retroactively made wrong by specification changes.  Assuming that such things will be allowed, in an environment where @supports exists.

If feature queries were set with @feature, as media queries are set using @media, or even if the name were something along the lines of @parses or @recognizes, I’d be a lot less bothered.  The implicit promise would be quite a bit different.  What I feel like we face here is the exact inversion of vendor prefixes: instead of a marker for possible instability and a warning that preserves the possibility of changing the specification when needed, this pretends to promise stability and safety while restricting the WG’s ability to make changes, however necessary.  My instinct is that @supports will end up in the same place: abused, broken, and eventually reviled — except this time, there will be the extra bitterness of authors feeling that they were betrayed.


How Twitter Got Its Line Breaks

Published 11 years, 8 months past

In the past day or so, Twitter started “supporting line breaks”.  This is something lots of third-party clients had been doing for a while, and heck, even Facebook does it.  In fact, if you had a tweet with linebreaks get auto-posted to Facebook by the Twitter-to-Facebook tool that Twitter provides, the linebreaks would show up there even though they didn’t on Twitter itself.  Until recently.

So how did they do it?  With CSS.  Here it is:

.tweet-display-linebreaks .tweet .js-tweet-text{
	white-space:pre-wrap
}

That’s it.  I’m not going to comment on their selector construction, except in the meta sense that I just did, but that single rule is all it took.

Well, not quite all.  The other thing they’ve done is to trim off any leading or trailing whitespace, and make sure the tweet’s content is right up against the opening and closing tags of the element.  It looks like so:

<p class="js-tweet-text">Y’all ready for this?</p>

When I input that tweet, it was like this, extra linefeeds and all:




Y’all ready for this?



So why do they trim off the edges?  Because if they left any whitespace between the tags and the content, pre-wrap would honor it.  This would happen even if Twitter, and not the author, was the source of the linefeeds between tags and content.  So rather than just ensure the content was placed normally, without any extra space, they went the trim($tweet) route.  I’m sure there are ways to beat the trimming; I haven’t tried to find them.  And there may be perfectly good reasons why they went the trim() route.  Maybe someone from Twitter will drop by to fill us in.
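
As an illustration (my markup, not Twitter’s): with pre-wrap in effect, the first version below would render the line breaks and the tab surrounding the content, while the trimmed second version shows only whatever breaks the author actually typed.

<!-- untrimmed: pre-wrap honors the line breaks and indentation around the content -->
<p class="js-tweet-text">
	Y’all ready for this?
</p>

<!-- trimmed: content sits tight against the tags, so only the author’s own breaks survive -->
<p class="js-tweet-text">Y’all ready for this?</p>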

I will also note that white-space: pre-wrap preserves sequences of spaces, just as pre elements do.  That means that anyone who double-spaces after sentences will have that extra space show up in their tweets, for everyone else to see.  Just like with the line breaks, author intent is thus preserved.  Deal with it.


Helvetial

Published 11 years, 8 months past

Maybe all the cool kids already know this, but I didn’t, so I’ll document it for the rest of us:  in Windows, Helvetica is not Helvetica: it’s Arial.  It’s Arial even if you explicitly ask for Helvetica and fall back to a non-sans-serif font family and allow for no other choices — but it’s not Arial if you try to get to Helvetica indirectly.

To see what I mean, you can load up my testcase in any Windows browser — IE, Firefox, Chrome, whatever — assuming that you haven’t installed Helvetica on your Windows machine.  (If you have, then I’d love to know what results you get.)  Given that you haven’t installed Helvetica, you should see that three of the four bottom-bordered spans are using Arial.  You can tell by the shapes of the “GR” characters, which are notably different between Helvetica and Arial.  Here’s what I apply to the first test list item:

#l01 .s01 {font-family: Helvetica, monospace;}
#l01 .s02 {font-family: Arial, monospace;}

My result is that they use exactly the same face, and that face is Arial, which should not have happened.  If Helvetica is not present, the first span should be rendered using a monospace font face.  If it is present, then the first span should have different letterforms than the second.

But it’s the second line where things get really interesting.  There, I assigned local copies of Helvetica and Arial (if they exist) to the invented family names “H” and “A”.  Then I apply this to the second test list item:

#l02 .s01 {font-family: H, monospace;}
#l02 .s02 {font-family: A, monospace;}
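
For reference, a mapping like “H” or “A” is created with @font-face and local(); a minimal sketch of that kind of rule (the testcase’s actual rules may differ slightly):

/* map invented family names to locally installed faces, if present */
@font-face {
	font-family: "H";
	src: local("Helvetica");
}
@font-face {
	font-family: "A";
	src: local("Arial");
}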

The result should be the same as the first line, but it isn’t: the first span gets a fallback font face, and the second span gets Arial.  So while the system redirects requests for Helvetica to Arial, it doesn’t do so in such a way that the invented family name “H” resolves to Arial, even though it was assigned Helvetica (or perhaps I should say “Helvetica”) as its source.

I’d be interested to know if there’s something I’ve overlooked or misunderstood here, because these waters are deep and I suspect my understanding of them is somewhat shallow.


Glasshouse

Published 11 years, 8 months past

Our youngest tends to wake up fairly early in the morning, at least as compared to his sisters, and since I need less sleep than Kat I’m usually the one who gets up with him.  This morning, he put away a box he’d just emptied of toys and I told him, “Well done!”  He turned to me, stuck his hand up in the air, and said with glee, “Hive!”

I gave him the requested high-five, of course, and then another for being proactive.  It was the first time he’d ever asked for one.  He could not have looked more pleased with himself.

And I suddenly realized that I wanted to be able to say to my glasses, “Okay, dump the last 30 seconds of livestream to permanent storage.”

There have been concerns raised about the impending crowdsourced panopticon that Google Glass represents.  I share those concerns, though I also wonder if the pairing of constant individual surveillance with cloud-based storage mediated through wearable CPUs will prove out an old if slightly recapitalized adage: that an ARMed society is a polite society.  Will it?  We’ll see — pun unintentional but unavoidable, very much like the future itself.

And yet.  You think that you’ll remember all those precious milestones, that there is no way on Earth you could ever forget your child’s first word, or their first steps, or the time they suddenly put on an impromptu comedy show that had you on the floor laughing.  But you do forget.  Time piles up and you forget most of everything that ever happened to you.  A few shining moments stay preserved, and the rest fade into the indistinct fog of your former existence.

I’m not going to hold up my iPhone or Android or any other piece of hardware all the time, hoping that I’ll manage to catch a few moments to save.  That solution doesn’t scale at all, but I still want to save those moments.  If my glasses (or some other device) were always capturing a video buffer that could be dumped to permanent storage at any time, I could capture all of those truly important things.  I could go back and see that word, that step, that comedy show.  I would do that.  I wanted to do it, sitting on the floor of my child’s room this morning.

That was when I realized that Glass is inevitable.  We’re going to observe each other because we want to preserve our own lives — not every last second, but the parts that really matter to us.  There will be a whole host of side effects, some of which we can predict but most of which will surprise us.  I just don’t believe that we can avoid it.  Even if Google fails with Glass, someone else will succeed with a very similar project, and sooner than we expect.  I’ve started thinking about how to cope with that outcome.  Have you?


The Stinger

Published 11 years, 8 months past

(In television, the “stinger” is the clip that plays during or just after the closing credits of a show.)

On Friday, the Web Standards Project announced its own dissolution.  I felt a lot of things upon reading the announcement, once I got over my initial surprise: nostalgia, wistfulness, closure.  And over it all, a deep sense of respect for the Project as a whole, from its inception to its peak to its final act.

In some ways, the announcement was a simple formalization of a longstanding state of affairs, as the Project had gradually grown quieter and quieter over the years, and its initiatives had been passed on to other, more active homes.  It was still impressive to see the group explicitly shut down.  I can’t think of the last time I saw a group that had been so influential and effective recognize that it was time to turn off the lights, and exit with dignity.  As they wrote:

Thanks to the hard work of countless WaSP members and supporters (like you), Tim Berners-Lee’s vision of the web as an open, accessible, and universal community is largely the reality. While there is still work to be done, the sting of the WaSP is no longer necessary. And so it is time for us to close down The Web Standards Project.

I have a long history with the WaSP.  Way, way back, deep in the thick of the browser wars, I was invited to be a member of the CSS Action Committee, better known as the CSS Samurai.  We spent the next couple of years documenting how things worked (or, more often, didn’t) in CSS implementations, and — and this was the clever bit, if you ask me — writing up specific plans of action for browsers.  The standards compliance reviews we published told browsers what they needed to fix first, not just what they were getting wrong.  I can’t claim that our every word was agreed with, let alone acted upon, but I’m pretty confident those reviews helped push browser teams in the right direction.  Or, more likely, helped browser teams push their bosses in the direction the teams already wanted to go.

Succumbing to a wave of nostalgia, I spent a few minutes trawling my archives.  I still have what I think is all the mail from the Samurai’s mailing list, run through Project Cool’s servers, from when it was set up in August 1998 up through June of 2000.  My archive totals 1,716 messages from the group, as well as some of the Steering Committee members (mostly Glenn Davis, though George Olsen was our primary contact during the Microsoft style sheets patent brouhaha of February 1999).  If I’m not reading too much into plain text messages over a decade old, we had a pretty great time.  And then, after a while, we were done.  Unlike the WaSP itself, we never really declared an end.  We didn’t even march off into the sunset having declared that the farmers always win.  We just faded away.

Not that that’s entirely a bad thing.  At a certain point, our work was done, and we moved on.  Still, I look back now and wish we’d made it a little more formal.  Had we done so, we might have said something like the WaSP did:

The job’s not over, but instead of being the work of a small activist group, it’s a job for tens of thousands of developers who care about ensuring that the web remains a free, open, interoperable, and accessible competitor to native apps and closed eco-systems. It’s your job now…

And so it is.  These last years have shown that the job is in very good hands.

“Never doubt that a small group of thoughtful, committed citizens can change the world. Indeed, it is the only thing that ever has,” said Margaret Mead.  I see now that the way those small groups truly change the world is by convincing the rest of the world that they are right, thus co-opting the world to their cause.  Done properly, the change makes the group obsolete.  It’s a lesson worth remembering, as we look at the world today.

I’m honored to have been a part of the WaSP, and I offer my deepest samurai bow of respect to its founders, its members, and its leaders.  Thank you all for making the web today what it is.

