Thoughts From Eric Archive

Technorati Redesigns

Published 20 years, 2 weeks past

It’s the season for redesigns, I guess—CNN did it over the weekend, and now Technorati has taken its beta design final.  I’m proud to say I had a part in making Technorati’s new look possible.  The graphic design was done by Derek Powazek, and from his graphic comp files I produced the XHTML and CSS.  Then I had to run the Tantek gauntlet; the job wasn’t done until he’d approved the code I’d produced.

If you dig under the hood of the new design, you’ll probably find things you’d have done differently.  I’m not going to go into a detailed post-mortem here, but suffice it to say that every choice was made within the project’s defined constraints.  So when you see, for example, a bunch of b elements used to create the corners, that approach was the best choice for the project: it best satisfied the concerns and demands of the various people involved.

This is not to say that my choices were the best for other projects with similar design demands but different technical demands.  They aren’t.  At a certain level, there are no canonically right answers.  There may be a whole spectrum of related solutions, where one variation is better for this project and another for that one.  And people like me, despite all their experience and knowledge, don’t always hit the right answer on the first try.  My initial approach to the corners is not what you see in the final markup.

That said, I am pleased with how I combined positioning and sprite-like styling to get the corners to work.  I know each technique has been done before, but I’m not aware of previous combinations of the two.  So that’s definitely a point of pride.  I hope to find time to document the details of this particular corner solution, along with variant approaches.
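For the curious, the general shape of the technique is easy to sketch.  What follows is not the Technorati markup itself—the class names, image name, and dimensions are all invented for illustration—but it shows how absolute positioning and a single sprite-style image can combine:

  <div class="box">
    <b class="tl"></b><b class="tr"></b>
    <p>Box content goes here.</p>
    <b class="bl"></b><b class="br"></b>
  </div>

  div.box {position: relative;}
  div.box b {position: absolute; width: 10px; height: 10px;
    background: url(corners.gif) no-repeat;} /* one image holds all four corners */
  div.box b.tl {top: 0; left: 0; background-position: 0 0;}
  div.box b.tr {top: 0; right: 0; background-position: -10px 0;}
  div.box b.bl {bottom: 0; left: 0; background-position: 0 -10px;}
  div.box b.br {bottom: 0; right: 0; background-position: -10px -10px;}

Each b element is pinned to a corner of the relatively positioned box, and background-position slides the shared image around so that the appropriate corner graphic shows through each one.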

I’d like to thank Derek and the rest of the Technorati team for letting me be a part of the redesign project, and for giving me a chance to flex my creative and technical muscles.


Increasing the Strength of Ajax

Published 20 years, 2 weeks past

There’s been some comment recently about how Ajax programming requires a different approach to UI and user notification.  Jeff Veen’s piece on Designing for the Subtlety of Ajax and Alex Bosworth’s post on the top 10 Ajax Mistakes are just two examples.

I pretty much agree with both pieces.  I’ve missed updates more than once on Ajax pages, just because I’m too used to how pages usually work.  I’ll click on something and then my attention will, out of habit, instantly go elsewhere—another window, another application, another computer, whatever—and keep subconscious track of what was happening in the window where I clicked, monitoring it in my peripheral vision for the flicker of a page reload.  Eventually there will be a little tickle in the back of my brain that says, “Hey, didn’t that site ever do anything?”  When I finally look straight at it, I realize that it did something quite a while ago, probably a split second after my mental focus moved away.  Instead of being efficient, I was wasting time waiting for a refresh that never came.

One might think it’s time for an “Ajax enabled” badge on pages so we know “better pay attention, ’cause this ain’t your father’s Web page”.  I don’t think that’s the way to go, however.  I think what’s needed is a more mature HCI design sense.  Web design has long relied on the page-update refresh to tell the user something has happened; this was such a part of the Web’s fabric that designing around it was almost unconscious.  There hasn’t been a need for sophisticated HCI considerations… until now.

In other words, Web design is going to need to grow up, and become more HCI-oriented than it has been.  The usability of a Web site will become as much about how you let the user know they’ve done something as it is about getting them to the thing they want to do.  In addition to getting the page to look inviting and present the information well, it will be necessary to obsess over the small details, implement highlights and animations and pointers—not to wow the user, but to help them.

In this endeavor, it’s worth remembering that there is a very large and long-standing body of research on HCI.  For years, many HCI experts have complained that the Web design field is making all sorts of errors that could be avoided if we’d just pay attention to what they’re telling us—a criticism which was not totally inaccurate.  Some Web design experts shot back that the Web was a different medium than the sorts of things HCI people studied, and anyway, the Web was not an application—and that rejoinder was also not totally inaccurate.  But with Ajax, the Web-application dichotomy is disappearing.  The retort is becoming less accurate, and the criticism more accurate.

I don’t claim to know what should be done.  The simplest update notification would be to set the visibility of the body element to hidden for half a second, and then back to visible, thus visually simulating a page refresh.  Crude, but it would play directly to users’ expectations.  The fading yellow highlight in Basecamp gets a lot of attention (and imitators), and that’s a good way to go too.  We could envision tossing a red outline onto something that changed, or animating a target-reticle effect on the updated content, or any number of other ideas.  Again I say: the decades of work done in HCI research are a resource we should not ignore.
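To make one of those ideas concrete, here’s a minimal sketch of a Basecamp-style fading highlight in plain JavaScript.  Everything about it—the function name, the timing, the colors—is invented for illustration, not taken from Basecamp’s actual code:

  // Call from an Ajax callback with the ID of the element that changed.
  function flashUpdate(id) {
    var el = document.getElementById(id);
    if (!el) return;
    var step = 0, steps = 20;
    var timer = setInterval(function() {
      // fade from pure yellow (255,255,0) back to white (255,255,255)
      el.style.backgroundColor =
        'rgb(255,255,' + Math.round(255 * step / steps) + ')';
      if (++step > steps) {
        clearInterval(timer);
        el.style.backgroundColor = '';
      }
    }, 50);
  }

The specific effect isn’t the point; the point is that the change announces itself where the user’s eye already is, and lingers long enough to be caught in peripheral vision.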

From my perspective, there are at least two good things in the Ajax world.  First is that the need for understanding and using CSS, XHTML, and the DOM has never been greater.  Okay, it’s a slightly selfish thing, but it leads directly into the second good thing: that the need for standards support has never been more critical.  If a browser wants to play in the Ajax space—if it wants to be a serious platform for delivering applications—then it’s going to have to get along with the others.  It’s going to have to support the standards.


CNN Redesigns

Published 20 years, 2 weeks past

Everybody’s favorite fringe news organization, CNN, has updated the design of their Web site.  Unlike the last three redesigns, I actually like this one right out of the gate.  Yes, I always got used to the old designs, and quickly at that, but at first I disliked them.  This time I’m impressed.  It’s a little bit sparse, but the generous whitespace is a refreshing change from the clutter of many news sites (*cough*Fox News*cough*).

In part, this may be because the design isn’t a redesign so much as a tasteful makeover of the old design.  By that, I mean that everything’s basically in the same place as before, just with a more serious look.  However, it’s the addition of extra functionality that really appeals to me.  For example, most section boxes now have the title followed by unobtrusive links to the main section page, video or other media, and then partner links.  These links add a lot without upsetting the apple cart, as it were.

I also note with a good deal of interest that CNN’s video clips are now free; previously, you had to pay money to see their video.  What forces led them to drop the subscription fee, I wonder?  I can think of some likely candidates, but it would be interesting to hear from CNN why they did it.

Of course, they’re only free if you have the Windows Media Player 9 plugin installed.  Otherwise, they’re simply unavailable.  Gah!


Liberal vs. Conservative

Published 20 years, 3 weeks past

So it turns out that crackers can mess up your Web site with nothing more than a malformed HTTP packet.  You might think something as simple as HTTP would be basically risk-free, but no, I’m afraid not.  All it takes is interaction between programs that handle HTTP data slightly differently, and hey presto, you’ve got a security hole.

Ben Laurie weighed in on this:

“It is interesting that being liberal in what you accept is the base cause of this misbehaviour,” Laurie says. “Perhaps it is time the idea was revisited.”

That’s a reference to the late Jon Postel’s dictum (from RFC 793) of “be conservative in what you do, be liberal in what you accept from others”.  This is done in the name of robustness: if you’re liberal in what you accept, you can recover from data corruption caused by unanticipated problems.

Laurie’s right.  The problem is that being liberal in what you accept inevitably leads to a systemic corruption.  Look at the display layer of the Web.  For years, browsers have been liberal in what markup they accept.  What did it get us?  Tag soup.  The minute browsers allowed authors to be lazy, authors were lazy.  The tools written to help authors encoded that laziness.  Browsers had to make sure they could deal with even more laziness, and the tools kept up.  Just to get CSS out of that death spiral, we (as a field) had to invent, implement, and explain DOCTYPE switching.
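For anyone who missed that era: DOCTYPE switching keys a browser’s rendering mode off the DOCTYPE declaration at the top of the document.  Roughly speaking—the exact triggers vary from browser to browser:

  <!-- no DOCTYPE (or an old, incomplete one): "quirks" rendering -->
  <html>

  <!-- a complete modern DOCTYPE: standards-based rendering -->
  <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
    "http://www.w3.org/TR/html4/strict.dtd">
  <html>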

The XML specification requires that a user agent halt on malformed markup and throw an error.  No error recovery attempts, just a big old “this is broken” message.  Gecko already does this, if you get it into full-on XML mode.  It won’t do it on HTML, or on XHTML served as text/html, because too many Web pages would just break.  But if you serve up XHTML as application/xhtml+xml and it’s malformed, you’ll be treated to an error message.  Period.
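If you want to see this for yourself, one way—assuming an Apache server and an .xhtml file extension, both my assumptions here—is a one-line configuration change:

  # httpd.conf or .htaccess: send .xhtml files as XML, which puts
  # Gecko into its draconian, stop-on-first-error XML parser
  AddType application/xhtml+xml .xhtml

Load a well-formed file that way and it renders normally; break a single tag and Gecko replaces the entire page with a parse error.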

And would that be so bad, even for HTML?  After all, if IE did it, you can be sure that people would fix their markup.  If browsers had done it from the beginning, markup would not have been malformed in the first place.  (Weird and abnormal, perhaps, but not actually malformed.)  Håkon said five years ago that “be liberal in what you accept” is what broke the Web, markup- and style-wise.  It’s been a longer fight than that to start lifting it out of that morass, and the job isn’t done.

Authors of feed aggregators have similar dilemmas.  If someone subscribes to a feed, thus indicating their interest in it, and the feed is malformed, what do you do?  Do you undertake error recovery in an attempt to give the user what they want, or do you just throw up an error message?  If you go the error route, what happens when a competitor does the error recovery, and thus gets a reputation as being a better program, even though you know it’s actually worse?  That righteous knowledge won’t pay the heating bills, come winter.

“So what?” you may shrug.  “It’s not like RSS feeds can be used to breach security”.

Which is just what anyone would have said about HTTP, until very recently.

In the end, the real problem is that liberal acceptance of data will always be abused.  Even if every single HTTP implementor in the world got together and made sure all their implementations did exactly the same strictly correct, conservatively defined thing, there would still be people sending out malformed data.  They’d be crackers, script kiddies—the people who have every incentive not to be conservative in what they send.  The only way to keep that malformed data from doing damage is to be conservative in what your program accepts.

Even then, it might be possible to exploit loopholes, but at least they’d be flaws in the protocol itself.  Finding and fixing those is important.  Attempting to cope with the twisted landscape of bizarrely interacting error-recovery routines is a fool’s errand at best.  Unfortunately, it’s an errand we’re all running.


Gatekeeper 1.5 rc3

Published 20 years, 3 weeks past

It’s update day!  I just pushed WP-Gatekeeper 1.5rc3 into the public eye.  The major change in this version is that Gatekeeper no longer prevents trackbacks (or pingbacks) from ever reaching your site.  See, before, it was effectively destroying all those without notice or appeal.  Now it just lets them through, whatever their content.

What this means is that Gatekeeper is, as it always was meant to be, a way to prevent comment-form spambots from succeeding.  Trackbots will now get through unless you take other steps, like disabling trackbacks or running another spam filter or something.  I’d actually like to see WordPress split tracks/pings apart from comments, and let you set their “always moderated” flags separately.  Thus you could set things up so all tracks and pings are moderated, but comment-form comments are not.  That would work great for me.  Maybe not so well for others, though.

Unfortunately, rc3 still has that problem where it doesn’t always automatically add a challenge to your comment forms, though you can still get the challenge by manually adding gatekeeper_pose_challenge to the comment forms in your theme.  My grep-fu (or maybe it’s my PHP-fu; or, hell, both) is weak; I can’t figure out why the routine fails.  Anyway, head on over to the Gatekeeper page if you’re interested, and especially if you can figure out why the auto-challenge routine is failing.  Thanks.
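In the meantime, for anyone doing it by hand, the call goes inside the comment form markup in your theme.  Treat the exact invocation below as an assumption rather than gospel—check the Gatekeeper page for the authoritative instructions:

  <!-- inside the <form> in your theme's comments.php -->
  <p><?php gatekeeper_pose_challenge(); ?></p>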


S5 1.1rc2

Published 20 years, 3 weeks past

Thanks to a comment from Pritt, the Safari arrow-key bug in S5 1.1rc1 has been, so far as I can tell, fixed.  I’m therefore releasing S5 1.1rc2, which will be the final release candidate unless any major bugs are encountered.

Also new to this revision are some slight modifications to the CSS that drives the system’s presentation.  The changes were all in the vein of changing div.slide to .slide.  Why bother?  Because with these slightly more generic rules, it’s now possible to create your slide show using the XOXO format instead of the OSF-compatible div-based markup.  (And the XOXO version may be OSF-compatible as well; the only real difference is that you’re using a list instead of a series of divs, and I’m not sure how much OSF cares whether each slide is a div or some other element with the appropriate class.)
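To make the difference concrete, here’s the same one-slide show both ways.  The slide content is invented for the example; the class names follow S5’s conventions, but the testbed file in the rc2 package is the authoritative reference:

  <!-- div-based (OSF-compatible) structure -->
  <div class="presentation">
    <div class="slide">
      <h1>Why XOXO?</h1>
      <ul>
        <li>Slides become list items</li>
      </ul>
    </div>
  </div>

  <!-- XOXO structure: the show is an ordered list, each slide an item -->
  <ol class="xoxo presentation">
    <li class="slide">
      <h1>Why XOXO?</h1>
      <ul>
        <li>Slides become list items</li>
      </ul>
    </li>
  </ol>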

I’ve included a XOXOized version of the testbed slideshow in the rc2 package, so feel free to check it out, if you’re interested.  Long-delayed thanks to Tantek for helping me work out the few changes that needed to be made to the CSS, and providing me with an example XOXOized S5 file so I could use it as a reference.


S5 vs. BBEdit 8.2.1

Published 20 years, 3 weeks past

Just a quick note for any of you who might be both a BBEdit user and an S5 author:  BBEdit 8.2.1, the latest update, will crash if you try to open any valid S5 presentation (I don’t know what happens with invalid files).  Apparently there’s a bug in BBEdit’s XHTML scanner that the S5 file structure triggers.  Version 8.2, which you can get from the Bare Bones FTP site if you don’t still have it locally, does not have the same bug, and will edit S5 files without any trouble.

The folks at Bare Bones are aware of the problem and have indicated that a fix will be in the next maintenance release of BBEdit.  For now, if you want to edit S5 files in BBEdit, stick to 8.2.

(And if anyone wants to take a crack at helping out with the problems in S5 1.1rc1, see my earlier post on the subject.  Thanks!)


Long-Term Visibility

Published 20 years, 3 weeks past

A fair portion of the feedback I get whenever I talk about microformats runs along the lines of “How is this any different from stuff like RDF, besides it being written using a far less structured vocabulary?”.  Tantek has laid down the basics of the answer to that question.  In a severely limited nutshell: the more visible the data, the more likely it is to be made relevant and to be kept that way.

What about search engine spamming?  Well, it’s usually easily recognizable as such by a human, so that’s in keeping with visibility and human friendliness.  If we suppose a spammer uses CSS to hide the spam from humans, as many do, it’s become invisible—exactly the same as traditional metadata, and exactly what happened to meta-based keywords before the search engines started ignoring them.  Some day (soon?) the search engines may start ignoring any content that’s been hidden, and as far as I’m concerned that would be just fine.
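The hiding itself is trivial, which is part of the problem; a single declaration turns “visible” spam into classic invisible metadata.  A made-up example:

  <!-- keyword stuffing, hidden from humans but served to indexers -->
  <div style="display: none;">
    free ringtones free ringtones free ringtones
  </div>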

Now, what about farther down the road—will semantic information always have to be visible?  An interesting question.  Tantek and I have had some pretty energetic arguments about whether the kind of stuff we’re putting into microformats will eventually move into the invisible realm of Semantic Web-style metainformation.  As you might guess from his post, Tantek says no way; I’m more agnostic about it.  Not every case of structured data lends itself to being visible, and in fact making some kinds of structuring data visible would be distinctly human-unfriendly.  There’s a reason browsers don’t (by default) display a page’s markup.

Besides, to some extent there’s invisible information in microformats, although it’s pretty much always tied to visible information (dates in hCalendar being one such example).  Sure, the class names and title values are there in the markup as opposed to off in some other file, but from a user point of view, they’re as invisible as meta keywords or RDF.  Usually it’s stuff we don’t want to be in the user’s face: markers telling which bits of content correspond to what, ISO versions of human-readable dates, that kind of thing.
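The dates are a good illustration.  In hCalendar, the human-readable date is the element’s content, and the machine-readable ISO 8601 form rides along in a title attribute—visible data with an invisible sidecar.  The event here is made up, but the pattern is the real one:

  <span class="vevent">
    <span class="summary">Example concert</span>:
    <abbr class="dtstart" title="2005-07-16">July 16th</abbr>
  </span>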

Then again, the truth is that the kind of information most people want to consume and manipulate is the kind of information that lends itself to being visible.  Structuring that data in such a way that the same data is useful to both humans and machines—turning the stuff you’re showing to people into the stuff that machines process—is a much more elegant approach, and one that frankly stands a higher chance of success, at least in the short term.

(A quick example: as Andy Baio says, “If hCalendar gets popular, Upcoming.org could scrape events off of websites instead of people entering them directly into Upcoming”.  Bands, who are already maintaining their own touring pages, could mark up said pages using hCalendar, and Upcoming would just suck in the information.  The advantages?  The band’s webmaster doesn’t have to set up the tour page and then go enter all the information into Upcoming; he just creates or updates the page and can then ping Upcoming, or wait for its spider to drop by.  The visible information, which is structured in a machine-parsable way, only has to be updated once.  Of course, the same would be true with regard to any event aggregator, not just Upcoming, and that’s another advantage right there.)

But will the semantic information stay baked into the visible information?  That’s a harder trend to forecast.  I remember when presentation was baked into the structure, and it’s been a massive struggle to get the two even partially separated.  On the other hand, it makes sense to me to pull presentation and structure apart, so that the former can rest upon the latter instead of having them bolted together.  I’m not sure it makes sense to do the same with semantics and structure.  Of course, what that really means is that I don’t think it makes sense to argue for their separation now.  Perhaps we’ll look back in a decade or two and, with new approaches in hand, chuckle over the thought that we’d ever bolted them together.  Alternatively, perhaps we’ll look back from that vantage and wonder why we ever thought the two could, let alone should, be separated.

In either case, it seems clear to me that the way forward is with visible data being used both for human and machine consumption; that is, with the microformat approach.  It’s a lightweight, easily grasped, infinitely extensible, and infinitely flexible solution, totally in keeping with the design principles that underpin the Web itself.

