Posts in the Web Category

A List Apart Returns

Published 19 years, 3 months past

A List Apart is back in business and sporting a radically new design.  Check it out!  Four columns on the main page?  Yes indeed!

I’m proud to say I had a hand in the redesign process, taking the visual goodness of Jason Santa Maria and turning it into living, breathing XHTML and CSS.  Keeping the pages from going completely crazy in broken browsers was an interesting challenge at times, but overall I think things came together rather nicely.  There may be a few glitches here and there, though we did our best to test widely and often; if so, we’ll handle them as they arise.

It was good fun working with the talented team members in this process, and I especially enjoyed being able to concentrate on what I know—building XHTML and CSS around existing designs—and leave the rest to other people who knew their stuff as well as I know mine.  Due to the strategic partnership between Complex Spiral Consulting and Happy Cog Studios, I look forward to assuming that role more often, and on ever more interesting projects.

Addendum: it seems the DNS change to point to ALA’s new TextDrive home hasn’t made it as far as I’d thought, so I’ll point you to the numeric IP address; that way, you can see it even if your local DNS hasn’t caught up yet.  Sorry for any confusion!

Addendum 2: it’s been long enough that the DNS change should have made it to all the far-flung corners of the net, so I’ve removed the numeric IP addresses.


On Blinksale

Published 19 years, 3 months past

Partially because it’s been touted as a great XHTML+CSS-based application, and partially because I could use better invoice management, I signed up for a free Blinksale account.  Having spent some time fiddling around with it, I’ll be the first to say that I’m pretty impressed by what’s under the hood.  The markup is just about as clean as a Web application gets, and it generally uses the right elements in the right places.  It might be a little div-heavy, but that’s not an easy thing to avoid.  The gang at Firewheel has done solid work there.

On the other hand, the visual design of Blinksale totally hurts my eyes.  Those are some amazing shades of green, boys.  I really wish they didn’t clash in quite that way.  Also, the entire application feels rather like a copy of Basecamp, from the way it’s organized to the ability to get activity feeds to the “Remember me for 2 weeks” login option.  Those are, of course, all nice options to have in this sort of application, but they still feel like copies.

The help system, meanwhile, turned out to be an enormously deep resource once I drilled in a bit.  Just about anything you could possibly want to know about Blinksale is in there somewhere, I’d wager.  Firewheel has definitely raised the bar there, and gets an enthusiastic round of applause for it.

Beyond that, Blinksale seems like it would be great for hourly consulting, or for invoicing items that are shipped to the customer.  For my purposes, though, it doesn’t really work as an invoicing system.  Most of my work involves traveling to clients to conduct in-person training, so in addition to the consulting fee, there are expenses to bill and receipts to submit.  In some cases, there are clients who would likely refuse to accept a web-based invoice.

So I could use it as a way to track which invoices have gone out and which have been paid, but I could do that with an Excel spreadsheet or a FileMaker Pro database, or heck, I could even whip up my own little PHP/MySQL solution.  Adding in all the extra stuff, like e-mailed invoices and reminders and thank-yous, would be time-consuming, and it would be a truly major effort to add my own PayPal integration, as Blinksale has done.  It probably wouldn’t be anywhere near as polished (although it also wouldn’t have those retina-searing color combinations).  For basic invoice tracking, though, I’d be able to do everything Blinksale offers me, and not run into limits like being able to store only three clients, or send only three invoices per month.

Now, remember, I’m talking about what it will do for me.  I’d like to stress that my situation is somewhat unusual: not many freelance consultants earn a living in training.  For a freelance designer or even a small design shop, I can totally see Blinksale as being a great application to use.  I doubt I’ll see a need to upgrade to a paid account—but your mileage, as ever, may vary.


Don’t Read; Speak!

Published 19 years, 4 months past

With the debut of the WaSP’s ATF, a vigorous conversation has gotten underway.  Joe Clark weighed in with some suggestions, Andy Clarke got some rousing comment action, and more have spoken up.  This follows some recent and widely-cited thoughts from Matt May on WCAG 2.0 (with an opposing view from Gez Lemon), and from Andy Clarke regarding accessibility and legislation (which inspired the publication of a different view from Andy Budd, not to mention another from Chris Kaminski).  I’ll join the chorus with some points of my own.  (Apparently, my recent post Liberal vs. Conservative was taken as a contribution to the discussion, which it wasn’t meant to be, although the points raised there are definitely worth considering in this context.)

This past May, I delivered a keynote at the 2nd International Cross-Disciplinary Workshop on Web Accessibility in Tokyo, and one of the major points I made was basically this: “Screen readers are broken as designed, and need to become speaking browsers”.

The problem is that screen readers are just that: they read what’s displayed on the screen for a sighted user.  In other words, they let Internet Explorer render the Web page, scrape the visual result, and read that.  I will acknowledge that in the tables-and-spacers era of design, this made a certain amount of sense.  That era is ending; in an important sense, it’s already over and we’re just cleaning up the mess it left.  Which is not to say that table markup was never, or should not presently, be used for layout purposes; nor is it to say that it should be.  Okay?

What I’m saying is that screen readers need to become speaking browsers: they need to ignore how the page is visually displayed, and read the content.  Use semantic markup when it exists, and otherwise ignore the markup in favor of the actual words, whether it’s plain text or alt text.  Go from the beginning of the document to the end, and ignore the CSS—at least that CSS which is meant for visual media, which these days is pretty much all of it.
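
To put that in concrete terms, here’s a minimal sketch (the file names are invented for illustration) of how style sheets already declare the medium they target; a speaking browser could simply skip anything scoped to screen, and honor aural styles where they exist:

<!-- hypothetical style sheet links; a speaking browser would ignore the
     screen-scoped styles and could honor the aural ones, if any are present -->
<link rel="stylesheet" type="text/css" media="screen" href="/css/visual.css" />
<link rel="stylesheet" type="text/css" media="aural" href="/css/spoken.css" />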

You might wonder how a speaking browser should deal with a table-driven site, of which there are still quite a few, he said with some understatement.  One distinct possibility is to do what I just said: ignore the non-semantic markup and read the content.  I can accept that this might fail in many cases, so I’ll present a fallback: DOCTYPE switching.  If a document has a DOCTYPE that would put a visual browser into standards mode, then be a speaking browser.  If not, then be a screen reader.
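
As a rough sketch of that fallback, and assuming the same DOCTYPE-sniffing rules visual browsers already use (the DOCTYPEs below are just representative examples), the decision could be as simple as this:

<!-- a DOCTYPE that puts visual browsers into standards mode:
     act as a speaking browser and read the document’s content and semantics -->
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">

<!-- no DOCTYPE at all (or one that triggers quirks mode):
     fall back to traditional screen-reader behavior -->
<html>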

DOCTYPE switching has been, despite a few hiccups, incredibly successful in helping designers move toward standards, and allowing browsers to permit standards-based design without sacrificing every page that’s come before.  The same, or at least a very similar, mechanism could help audible-Web tools.

The WaSP has done great things in their efforts to show vendors why Web design tools should produce standards-oriented markup and CSS.  I sincerely hope they can produce similar results with audible-Web vendors.


Liberal vs. Conservative

Published 19 years, 5 months past

So it turns out that crackers can mess up your Web site with nothing more than a malformed HTTP packet.  You might think something as simple as HTTP would be basically risk-free, but no, I’m afraid not.  All it takes is interaction between programs that handle HTTP data slightly differently, and hey presto, you’ve got a security hole.

Ben Laurie weighed in on this:

“It is interesting that being liberal in what you accept is the base cause of this misbehaviour,” Laurie says. “Perhaps it is time the idea was revisited.”

That’s a reference to the late Jon Postel’s dictum (from RFC 793) of “be conservative in what you do, be liberal in what you accept from others”.  This is done in the name of robustness: if you’re liberal in what you accept, you can recover from data corruption caused by unanticipated problems.

Laurie’s right.  The problem is that being liberal in what you accept inevitably leads to a systemic corruption.  Look at the display layer of the Web.  For years, browsers have been liberal in what markup they accept.  What did it get us?  Tag soup.  The minute browsers allowed authors to be lazy, authors were lazy.  The tools written to help authors encoded that laziness.  Browsers had to make sure they could deal with even more laziness, and the tools kept up.  Just to get CSS out of that death spiral, we (as a field) had to invent, implement, and explain DOCTYPE switching.

In XML, it’s defined that a user agent must treat malformed markup as a fatal error and stop.  No error recovery attempts, just a big old “this is broken” message.  Gecko already does this, if you get it into full-on XML mode.  It won’t do it on HTML and XHTML served as text/html, though, because too many Web pages would just break.  If you serve up XHTML as application/xhtml+xml, and it’s malformed, you’ll be treated to an error message.  Period.
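
For example (a deliberately broken fragment of my own devising), something as small as an element that overlaps another and never gets closed is a well-formedness error.  Served as application/xhtml+xml, it stops the parser cold; served as text/html, the browser quietly papers over it:

<!-- malformed XHTML: the em element is never closed and overlaps the p -->
<p>This sentence is <em>emphatically broken.</p>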

And would that be so bad, even for HTML?  After all, if IE did it, you can be sure that people would fix their markup.  If browsers had done it from the beginning, markup would not have been malformed in the first place.  (Weird and abnormal, perhaps, but not actually malformed.)  Håkon said five years ago that “be liberal in what you accept” is what broke the Web, markup- and style-wise.  It’s been a longer fight than that to start lifting the Web out of that morass, and the job isn’t done.

Authors of feed aggregators have similar dilemmas.  If someone subscribes to a feed, thus indicating their interest in it, and the feed is malformed, what do you do?  Do you undertake error recovery in an attempt to give the user what they want, or do you just throw up an error message?  If you go the error route, what happens when a competitor does the error recovery, and thus gets a reputation as being a better program, even though you know it’s actually worse?  That righteous knowledge won’t pay the heating bills, come winter.

“So what?” you may shrug.  “It’s not like RSS feeds can be used to breach security”.

Which is just what anyone would have said about HTTP, until very recently.

In the end, the real problem is that liberal acceptance of data will always be exploited.  Even if every single HTTP implementor in the world got together and made sure all their implementations did exactly the same strictly correct conservatively defined thing, there would still be people sending out malformed data.  They’d be crackers, script kiddies—the people who have incentive to not be conservative in what they send.  The only way to keep that malformed data from doing any damage is to be conservative in what your program accepts.

Even then, it might be possible to exploit loopholes, but at least they’d be flaws in the protocol itself.  Finding and fixing those is important.  Attempting to cope with the twisted landscape of bizarrely interacting error-recovery routines is a fool’s errand at best.  Unfortunately, it’s an errand we’re all running.


Web Essentials 05

Published 19 years, 6 months past

Just as I prepare to leave for WWW2005 in Japan, John Allsopp has announced the details for Web Essentials 05 in Sydney this September.  Everyone’s fave Molly kicks things off with a keynote, and there will be some great speakers: Tantek Çelik, Jeff Veen, Kelly Goto, Derek Featherstone, Douglas Bowman, Russ Weakley, Cameron Adams, John Allsopp himself, and more.

Oh, and me.  I’ll be there, too.  You can get all the details at the WE05 web site.  I heard great things about WE04, so I’m really looking forward to WE05.  Hopefully I’ll see you there!  It’ll be a fair dinkum, and very likely truly bonzer, no worries.

Did I use any of those colloquialisms correctly?


Deep Linking, Shallow Thinking

Published 19 years, 8 months past

So a few weeks ago you might have noticed a bit of brouhaha that surrounded the new Terms and Conditions for Orbitz.com, set to go into effect today.  For anyone who missed or forgot about it, a refresher:  in Section 6, you find this wonderful bit of total cluelessness:

We reserve the right to require you to remove links to the Site, in our sole discretion.

Linking to any page of the Site other than to the homepage is strictly prohibited in the absence of a separate linking agreement with Orbitz.

So under their Terms and Conditions, it would be forbidden for me to point to a press release that announces Orbitz suing some former employees; or to their mangled-markup list of press releases; or, for that matter, to a medium-resolution JPEG of the Orbitz logo (which is rather ominously referred to in the Terms and Conditions as a “Mark of Orbitz”, which sounds like something that might have been mentioned in the first draft of the Book of Revelation).

It should be noted, however, that Section 4 of the old Terms and Conditions contains this amazing little gem:

You agree not to create a link from any Web site, including any site controlled by you, to our site.

Because nothing could be worse than increasing traffic to your site.

So yes, this post is in complete violation of both the old and new Terms and Conditions for Orbitz.com.  And if I had ever been, or ever planned to be, a customer of Orbitz—thus agreeing to said Terms and Conditions—that might actually bother me for a second or two.  But, as they say:

If you do not accept all of these terms, then please do not use these websites.

Boys, you got yourselves a deal.


More Spam To Follow

Published 19 years, 10 months past

So… rel="nofollow".  Now there’s a way to deny Google juice to things that are linked.  Will it stop comment spam?  That’s what I first thought, but I’ve come to realize that it’ll very likely make the problem worse.  In the last few hours, I’ve been hearing things that support this conclusion.
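
For anyone who hasn’t yet seen it in the wild, the mechanism itself is nothing more than a rel value on the link (the URL here is just a placeholder):

<!-- a comment-supplied link carrying the nofollow relation; indexers that
     honor it won’t count the link toward the target’s ranking -->
<a href="http://www.example.com/" rel="nofollow">the commenter’s site</a>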

First, the by-now required disclaimer: I think it’s great that Google is making a foray into link typing, and I don’t think they should reverse course.  For that matter, it would be nice if they paid attention to VoteLinks as well, and heck, why not collect XFN values while they’re at it?  After all, despite what Bob DuCharme thinks, the rel attribute hasn’t been totally ignored these past twelve years.  There is link typing out there, and it’s spreading.  Why not allow people to search their network of friends?  It’s another small step toward Google Grid… but I digress.

The point is this: rather than discourage comment spammers, nofollow seems likely to encourage them to new depths of activity.  Basically, Google’s move validates their approach: by offering bloggers a way to deny Google juice, Google has acknowledged that comment spam is effective.  This doesn’t mean the folks at Google are stupid or evil.  In their sphere of operation, getting comment spam filtered out of search results is a good thing.  It improves their product.  The validation provided to spammers is an unfortunate, possibly even unanticipated, side effect.

There is also the possibility, as many have said, that nofollow will harm the Web and Google’s results, because blindly applying a nofollow to every comment-based link will deny Google juice to legitimate, interesting stuff.  That might be true if nofollow is used like a sledgehammer, but there are more nuanced solutions aplenty.  One is to apply nofollow to links for the first week or two after a comment is posted, and then remove it.  As long as any spam is deleted before the end of the probation period, it would be denied Google juice, while legitimate comments and links would eventually get indexed and affect Google’s results (for the better).

In such a case, though, we’re talking about a managed blog—exactly the kind of place where comment spam had the least impact anyway.  Sure, occasionally the Googlebot might pick up some spam links before the spam was removed from the site, but in general spam doesn’t survive on managed sites long enough to make that much of a difference.

Like Scoble, where I might find nofollow of use would be if I wanted to link to the site of a group or person I severely disliked in order to support a claim or argument I was making.  It would be a small thing, but still useful on a personal level.  (I’d probably also vote-against the target of such a link, on the chance that one day indexers other than Technorati’s would pay attention.)

No matter what, the best defense against comment spam will be to prevent it from ever appearing in the first place.  There are of course a variety of methods to accomplish this, although most of them seem doomed to fail sooner or later.  I’m using three layers of defense myself, the outer of which is currently about 99.9% effective in preventing spam from ever hitting the moderation queue, let alone making it onto the site.  One day, that layer’s effectiveness will very suddenly drop to zero.  The second layer was about 95% effective at catching spam when it was the outer layer, and since it’s content-based it will likely stay at that level over time.  The final layer is a last-ditch picket line that only works in certain cases, but is quite effective at what it does.

So what are these layers, exactly?  I’m not telling.  Why not?  Because the longer these methods stay off the spammers’ radar, the longer the defenses will be effective.  Take that outer layer I talked about a moment ago: I know exactly how it could be completely defeated, and for all time.  Think I’m about to explain how?  You must be mad.

The only spam-blocking method I can think of that has any long-term hope of effectiveness is the kind that requires a human brain to circumvent.  As an example, I might put an extra question on my comment form that says “What is Eric’s first name?”  Filling in the right answer gets the post through.  (As Matt pointed out to me, Jeremy Zawodny does this, and that’s where I got the idea.)  That’s the sort of thing a spambot couldn’t possibly get right unless it was specifically programmed to do so for my site—and there’s no reason why any spammer would bother to program a bot to do so.  That would leave only human-driven spam, the kind that’s copy-and-pasted into the comment form by an actual human, and nothing short of personally approving every single post will be able to stop that completely.
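
As a sketch of how small such a barrier can be (the field name and wording here are purely illustrative), the comment form just grows one extra question, and whatever script receives the post refuses any submission whose answer doesn’t match:

<!-- hypothetical extra field on the comment form; the handler that receives
     the submission simply compares this value against the expected answer -->
<label for="humancheck">What is Eric’s first name?</label>
<input type="text" id="humancheck" name="humancheck" />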

So, to sum up: it’s cool that Google is getting hip to link typing, even though I don’t think the end result of this particular move is going to be everything we might have hoped.  More active forms of spam defense will be needed, both now and in the future, and the best defense of all is active management of your site.  Spammers are still filthy little parasites, and ought to be keelhauled.  In other words: same as it ever was.  Carry on.


Structural Naming

Published 20 years, 4 months past

After I threw out my two cents on ID naming conventions, Andy Clarke revisited the subject and made some more detailed proposals.  As I said before, I think this is a good conversation to be having.  However, the reactions of some people make me think that I wasn’t entirely clear about why.

A standard nomenclature offers the ability to restyle sites, sure.  That’s kind of cool in an übergeek kind of way, like making jokes involving TCP/IP terminology or wearing a T-shirt that says SELECT * FROM users WHERE clue > 0.  That isn’t really the primary reason why I support the exploration of ID naming conventions.  I’d like to see those conventions emerge because they will serve as a useful starting point for beginners in the standards-oriented design field.  It would help reinforce the idea of structural naming, as opposed to presentational naming.

We’re all familiar with presentational naming.  It’s things like id="leftbar" and id="pagetop".  In terms of layout, it’s the equivalent of <b> and <font>.  Structural naming, on the other hand, encourages the author to ask “what role does this piece of the document play?”  Rather than obsess over the visual placement of the navigation links, structural naming gets authors to consider the document structure.  This can’t be anything but good, at least for those of us who want to promote improved structures.  To pick one set of examples from Andy’s recent post:

#branding: Used for a header or banner to brand the site.
#branding-logo: Used for a site logo.
#branding-tagline: Used for a strapline or tagline to define the site’s purpose.

While #branding is described as “Used for a header or banner…”, you may note that the actual ID name has nothing to do with visual placement.  It’s all about identifying the (dare we say it?) semantic role of that bit of the document.  By encouraging that thinking, a structural-naming convention keeps the author in that frame of mind when he has to go outside the common set of names.
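
Here’s a minimal sketch of those IDs in use (the logo path, alt text, and tagline are invented for the example); nothing in the names says where the branding block sits on the page, only what it is:

<!-- structural IDs describe the role of each piece, not its visual position -->
<div id="branding">
  <img id="branding-logo" src="/images/logo.png" alt="Example Site" />
  <p id="branding-tagline">Standards-based design, one page at a time.</p>
</div>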

I see this as being much like the often-promoted ‘rule’ that link-state styles should go in the order link-visited-hover-active.  I even wrote an article explaining why that order is important.  Here’s the thing: once you understand why the order is important, you can break the ‘rule’ in creative ways.  For example, suppose you want your hover effect to apply only to unvisited links, whereas visited links should get no hover.  No problem!  Just put them in the order link-hover-visited-active, or even link-hover-active-visited if you want visited links to get no active styles, either.
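
In CSS terms, the reordered version looks like this (the colors are arbitrary): because all four rules have equal specificity, the later ones win, so :visited overrides :hover and the hover color only ever shows on unvisited links.

/* link-hover-visited-active: hover styles apply only to unvisited links */
a:link    { color: blue; }
a:hover   { color: orange; }
a:visited { color: purple; }
a:active  { color: red; }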

(Side note: if you chain pseudo-classes, such as with a:visited:hover, then the ordering problem goes away.  If Explorer supported that syntax, we could all move on from the LVHA rule.  Too bad it doesn’t.)
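
With chained pseudo-classes, the intent is stated explicitly rather than implied by source order; something like the following would work in browsers that support the syntax (again, colors are arbitrary):

/* source order no longer matters: each rule names exactly the state it styles */
a:link:hover    { color: orange; }
a:visited:hover { color: purple; }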

Conventions and ground rules exist for a reason: to provide a lower barrier to entry, and to help guide those new to the field.  Once you become experienced, you can break the rules in creative ways.  It’s been said that the key to good jazz improvisation is a thorough understanding of the rules of music.  In other words, once you really know those rules, then you know how to break them.  In order to know the rules, though, there have to be rules.

That’s why I’m glad to see them starting to emerge in blog postings and the public thinking of people in the field.  The development of these rules is not a barrier to creativity, but an enabler of it.

