Posts in the Standards Category

Bingo

Published 20 years, 2 months past
Standards don’t matter because it’s the applications that matter. (To the legions of people who lay this platitude on me, pretty much on a daily basis, thank you. I wouldn’t have been able to figure it out without you. I would just sit around working on a 500 page spec for its own sake, because that’s so much fun.) But try to scale these applications up, and try to reuse your content, across an enterprise or over the Internet, without standards. Just try. And even if you don’t want to use standards, your customers will eventually make you, because by now they have gotten tired of paying you too much money to rewrite the same content over and over and over again for each new application use, each new platform.

From “Like a Phoenix From the Ashes: X3D and the Rebirth of Reason” by Tony Parisi.  He says it better than I ever could, and what he says is just as relevant to our corner of the Web as it is to his.

A tip o’ the hat to the WaSP (who also recently talked about the Microsoft redesign) for pointing it out.


Microsoft Migration

Published 20 years, 2 months past

As has been reported in a bunch of places, Microsoft’s home page has been redesigned and uses standards-oriented design principles, eschewing tables and spacer GIFs for CSS-driven layout.  The weight of their home page’s HTML document dropped by almost three-quarters—it’s 27.5% the size of the IE-specific version of the previous design, and 30.5% the size of the non-IE version.  It still uses a few tables for layout (about one-sixth what it did before) and throws some validation errors, but not in overwhelming numbers.  It seems that they’ve even dropped browser sniffing, and are serving the same document to everyone.  Doug has more details, for those who are curious.

I think this is absolutely incredible, and to be applauded.  The Web Standards Project should already have given them a round of applause, in my opinion, and hopefully will do so very soon.  It may seem like odd timing given the recent launch of the WaSP’s Browse Happy campaign, but the Web site and the browser are two very different things.  There’s no contradiction in discouraging use of the latter while applauding the improvements of the former.

In my rounds of the posts and comments about this redesign, I saw a couple of people say things to the effect of “this is nothing important, the markup is still crap, and MS shouldn’t be praised simply for producing less crap than before”.  Um… yes they should.  As far as I’m concerned, any company or other organization that makes a standards-oriented move deserves praise for their efforts.  We might discuss what else they can do to improve further, but to slam someone who’s doing the right thing for not immediately achieving utter perfection (for whatever value of perfection happens to be in vogue at the time) seems like the most counterproductive act I can envision.  Well, besides launching DDoS attacks.  This is a time to encourage them to continue, not give them ample reason to reverse course.

Analogy: if your significant other voluntarily cleans up after dinner, but misses a couple of the dirty dishes, which is more likely to get the job done?

  1. Yell at them for not doing a complete job and storm off.
  2. Thank them for doing the dishes, and gently point out that they missed a couple.  Offer to do them yourself.

I bet the second one works in most cases.  In fact, we could offer to do the “missed dishes” by figuring out fixes for the remaining errors, and offering them for use.

I wish I could say I’d had something to do with their redesign through Complex Spiral Consulting, but I didn’t.  Doug probably wishes he could say something similar with regards to Stopdesign, but from what I can tell he wasn’t involved either.  Mike Davidson is in their area, but since he hasn’t said anything yet I’m guessing that he’s also an observer.  The initial impression is that this was an internally-driven effort, one that has paid off in bandwidth savings, efficient coding, and more.

So I’m taking this opportunity to salute the efforts of the microsoft.com team.  Well done!


SES San Jose Corrections

Published 20 years, 3 months past

A few days ago, I posted the entry Silly Expert Opinions, in which I made some snide comments and rebutted some points related in a post at compooter.org.  In so doing, I fell victim to one of the classic blunders:

Never take someone to task for saying something you weren’t there to hear.

…because it may turn out they didn’t actually say it, or didn’t mean it in the way it was reported.

In the comments on the compooter.org post, the SES conference organizer Danny Sullivan (founder and editor of Search Engine Watch) and two of the panelists have calmly and professionally explained the other side of the story—the one where some of the points attributed to them were never made, some were seriously spun, and others were taken out of context.  The comments are well worth reading from about #12 on, that being Danny’s first post.  See also the thread “SES slammed by designers” at the Cre8asite forums.  (Although I should note once more that I’m not a designer.)

Unfortunately, my post triggered other posts, such as one at Molly’s crib and a  WaSP Buzz post this morning (thankfully there’s a more detailed followup).  We all fell victim to the blunder, but I fully take the blame for kicking things into high gear.  I sometimes forget that the entries I post are read and taken seriously by a whole lot of people; that my words have, at least in some circles, a certain weight.  And sometimes I let my penchant for smart-assed commentary get ahead of my more sober desire to speak with intelligence and accuracy.  My post of last Friday is such an example, and I’m sorry it’s caused confusion.  I apologize not only to the panelists and to Danny, but to anyone I inadvertently misled.

In my post, I did posit the idea that I might get into the SEO conference circuit, and now I have that ability, thanks to Danny’s deep professionalism—he could have easily, and with good reason, flamed me in e-mail and left it at that.  He didn’t.  He treated me with respect (probably more than I deserved) and opened the door I’d tried to slam.

In the afternoon WaSP post, Chris Kaminski said:

Here’s an idea: perhaps we standards folks and the SEO crowd should do a bit of knowledge sharing?  In the comments, Danny Sullivan said he’s already asked Eric Meyer to do just that, with an eye towards a possible speaking slot at an upcoming SES no less. That’s a great start. But I think we can do more. I think there’s gold to be found at the intersection of SEO and standards, or at least some good web development.

Let’s keep the beginning of dialogue in the comments to the compooter.org post, throw out the flames and ignorance, and use it to build a better set of best practices for web development. One that accounts for standards, accessibility, usability and search engines.

I agree wholly with Chris: let’s keep the dialogue going.  We’re lucky that the opportunity arose and wasn’t soured by me shooting off my mouth.  It’s time to see what can be done to harmonize the two fields, and where things can be improved.  I’m going to see what I can do about taking Danny up on his offer to attend an SES conference in the future.

I’m particularly interested because it seems, reading between the lines, that standards-oriented design isn’t as search-engine friendly as I’d thought (although it’s certainly much better than most alternatives).  Peter Janes created a test of Google’s treatment of heading levels, and the results weren’t exactly encouraging.  It bothers me that standards-oriented design and search engine optimization might be at odds, whether partially or fully.  This is definitely something that needs to be cleared up.  The results could affect the future evolution of search engines, which is a goal worth pursuing.

If you have ideas about how to get there faster, or have search engine tests of your own to share, let us know.


Silly Expert Opinions

Published 20 years, 3 months past

Live, from San Jose, it’s the craptastic adventures of SES San Jose 2004!  Apparently a trio of SEO experts decided to share their views on “Advanced Design Issues”, which included the use of Javascript and CSS.  Because, obviously, expertise in one area automatically confers expertise in another, by which logic I will now declare myself an expert in cellular biology.  Photo Matt, from whom I got this URL, drily observed “write HTML like it’s 1997”.  Funny, but not quite accurate.  Nobody ever wrote Web pages like these people advocate.  The idea that some people might do so is a troubling thought.  Note to the SES conference organizers: in the future, you might want to think about having people stick to what they know, so as to avoid diluting your conference’s value.

I’m not going to go through and rebut every point, because it would be pointless to do so.  Most readers of this site will be able to formulate their own rebuttals, assuming of course that they aren’t incapacitated by laughter, astonishment, or some combination of the two.  I will draw attention to the one point they might have gotten partly right.  It’s:

Image replacement makes your site inaccessible.

Much has been written about various image replacement techniques and the accessibility problems they present, so it sounds like our trippy triumvirate could have been on the money there.  However, were they talking about replacing text with images, as with FIR and so on, or replacing images with text?  Because if it’s the latter, then we have a case of fully consistent unreality.  And where do they get off worrying about accessibility, given the rest of their advice?

In light of recent commentary from Jeffrey and Molly, among others, I was particularly interested to read the following:

Do not put the entire contents of your page in an <h1>, rather put only half inside an <h1> and stick the other half in an <h2> or other header tag.

It would seem that, at least for some segments of the industry, we are indeed wasting our energy talking about the proper use of heading levels.  That’s way, way too far ahead of where they are right now.  We need to be telling them about the fundamentals of how documents ought to be structured, in the optimistic hope that they’ll one day catch up.  After all:

Don’t validate your code under any circumstances because hierarchically correct and valid markup is of no use to a search engine.

On which note, can anyone confirm that engines like Google actually make use of heading elements in determining page rank?  I’m looking for a link to actual results demonstrating the effect of headings on Google’s ranking of a page; if you have one handy, kindly drop it to me via e-mail.  I’d just like to know one way or the other.  Google employees are particularly encouraged to write me about this.

I suddenly have the urge to do a round of the SEO conference circuit to set the record straight.  I wonder how one might go about that?  After all, I’m more of an SEO expert than at least two of these SEO “experts”: Googling for “Eric Meyer” gets you my home page as the #1 result, but searching for one name returns his site at #3, just behind an Assistant Professor of Engineering Management at University of Missouri-Rolla and a blog written by a fan of Howard Dean.  The other one didn’t even show up in the first ten Google results, so far as I could tell, and I was searching for his name.

So perhaps it’s time I looked into what it takes to be a presenter at SES/SEO conferences.

Update: please see SES San Jose Corrections for more information, and a much different story than was originally posted here.


Hypertext 2004

Published 20 years, 3 months past

So here I am in Santa Cruz, California, at the Hypertext 2004 conference.  (That’s Tantek Çelik and me flanking a conference poster presenting XFN.)  Our XFN poster presentation, in its full A0 glory, is up for everyone to see.  We’re really looking forward to hearing the conference attendees’ feedback.

Yesterday I presented a full-day tutorial on standards-oriented design which seems to have been well received by those who attended.  In the afternoon portion I presented a standards-oriented makeover of acm.org, which has some of the worst markup I’ve ever seen.  Go ahead, take a look at the HTML source for the links in the right-hand sidebar.  You’ll be astonished.

Then again, yesterday I went to download the conference program and discovered it was a link to a PDF file with PHP session ID information appended to the URL.

(A short pause while I contemplate some choice words.)

What they pretty obviously did was take the PDF that was used to create the printed program and throw it online, which I suppose I can understand.  It’s easy to take an existing file and just publish it.  In addition, the Thursday keynote is a case study of the creation of the PDF format.  But c’mon, guys—how about some hypertext, maybe?  Just a little?  Even links in the PDF?  The content lends itself beautifully to structural HTML, actually.

Oh well.  The cobbler’s children and all that, I suppose.


Creative Dissidence

Published 20 years, 3 months past

Self-described ‘Web hobbyist’ François Briatte recently conducted a survey of ten sites, comparing them on 25 different points, and has published the results.  I suggested that those responsible for the sites that were reviewed might write up their perspectives on the rankings and their dissidence levels, as Dave Shea and Jon Hicks have already done.  I tied with John Gruber for third, which amuses me.

I’ve seen some criticism of this survey scattered about, with two points made:

  1. The survey is too small, only examining ten sites.
  2. The survey is too limited in terms of the kinds of sites studied (all ten are basically Web design blogs, to one degree or another).

True, François picked ten sites instead of several hundred, all ten being sites that François reads regularly, and looked at them in detail.  Well, at least he went to the effort of doing it and publishing the results for free.  Don’t like the number of sites, or the sites surveyed, or the questions that were asked?  Fine.  Do your own survey and publish the results.  We’re waiting.  I’d actually like to see more such surveys, especially those done with the impressive degree of thoroughness and thought that François displayed.

Here are my thoughts about meyerweb’s rankings and the criteria in general; I’ll stick to commenting on those points where I differed from the crowd, or else where I think there’s something particularly interesting to note.

Are links underlined / are hovered links underlined / do visited links differentiate? (yes on all three)

As it turns out, my site is acting as a mirror for François’ browser preferences: I don’t assert link styles at all.  What I’d be interested to know is how many of the surveyed sites follow the same path.  My guess is few, or none.  Even Jakob‘s style sheet asserts link styles.
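For comparison’s sake, “asserting link styles” means rules along these lines; the colors and selectors here are purely illustrative, not taken from anyone’s actual style sheet:

```css
/* Illustrative only.  A site that asserts link styles pins all
   of these down; leaving them out (as meyerweb does) lets the
   visitor's browser preferences show through instead. */
a:link    { color: #00c; text-decoration: underline; }
a:visited { color: #528; }   /* differentiates visited links */
a:hover   { text-decoration: underline; }
```

Note the link-visited-hover ordering: with equal specificity, a later rule wins, so shuffling these around can accidentally swallow the hover effect.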

Is the layout fixed in width? (no)

Mine is not, and that leaves me in a strong minority.  I prefer a layout that flexes with the browser window, and I suspect I always will.  This does mean that the layout can start overlapping itself at very narrow window widths, but then a fixed-width design forces a scrollbar at narrow widths.  Which is better?  I’ve made my choice, of course, as have all the others surveyed.  I’ve yet to decide one way or the other is truly better, although of course many people have very strong opinions.

Is there a search box? (no)

I should probably add one, but for some reason it just never seems like a critical priority.  There is a search function on the archive pages for “Thoughts From Eric”, but that comes built into WordPress.  It may be that my reluctance stems from the fact that I’d probably end up using a Google search box, which seems like cheating.  I should probably get over myself and just do it.
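If I ever do get over myself, the cheap route would be a form along these lines.  I’m going from memory of Google’s free site-search snippet here, so treat the sitesearch parameter as an assumption and check their documentation before copying it:

```html
<!-- Sketch of a Google-powered site search box.  The "sitesearch"
     parameter is my recollection of Google's free search offering,
     not something I've verified; confirm against their docs. -->
<form method="get" action="http://www.google.com/search">
  <input type="text" name="q">
  <input type="hidden" name="sitesearch" value="meyerweb.com">
  <input type="submit" value="Search">
</form>
```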

Are “steal these” buttons used? (yes)

Dave Shea’s reaction was: “Ugh. Please, no.”  I think they’re useful, both for RSS feeds and for the “XFN Friendly” badge.  I’d also like to think they’re well integrated into my design (such as it is), although obviously that’s a matter of taste.

Does the navigation bar use image rollovers? (no)

Nope, it’s all text.  I may one day get fancier about the design (don’t hold your breath) and if so, I’ll probably have some kind of image rollover for the navigation.  At the least, I’d use a Sliding Doors-type decoration, which could count as an image rollover effect.

Is there a print stylesheet? (yes)

I have to admit to some surprise that the majority of the sites didn’t use one.  Mine is pretty low-key, doing basic things like preventing the sidebar from being printed.  I wish more sites did that kind of thing.  A link-filled sidebar is as useless on the page as it is useful on the Web.
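For anyone who wants to follow suit, the low-key version needs very little.  Here the #sidebar selector is a placeholder for whatever hook your markup actually provides:

```css
/* Either link a separate sheet with media="print", or drop an
   @media block like this into the main style sheet. */
@media print {
  #sidebar { display: none; }   /* placeholder id for the sidebar */
  a { color: black; }           /* links print as plain text anyway */
}
```

The separate-sheet route is just a link element in the head with media="print" on it; the effect is the same either way.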

Is the page UTF-8 encoded? (no)

I’m still kickin’ it ISO-8859-1 style.  This is in part because when I tried to enable UTF-8 encoding a while back, all my characters got thoroughly mangled.  I spent some time trying to fix it, and then dropped back to 8859, which I knew would work.  If I ever figure out how to do UTF-8 correctly, I’ll probably switch over.

Is the DOCTYPE Strict / is the page XHTML / is there an XML prolog? (no on all three)

I use HTML 4.01 Transitional, so clearly there wouldn’t be an XML prolog.  Even if I used XHTML, there wouldn’t be one, since its presence triggers quirks mode in IE6/Win—thus the 100% agreement among the surveyed sites on its absence.
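For the record, the prolog in question is the one line an XML-serialized document would start with, and IE6/Win treats anything before the DOCTYPE as a reason to fall back to quirks mode:

```html
<?xml version="1.0" encoding="iso-8859-1"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<!-- With the first line present, IE6/Win ignores the DOCTYPE and
     renders in quirks mode; remove it, and the same DOCTYPE
     switches the browser into standards mode. -->
```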

So why HTML instead of XHTML?  Because there continue to be no major advantages to valid XHTML over valid HTML, which is what I strive to attain.  In some sense, there are disadvantages, albeit of a minor variety—I find trailing slashes on empty elements and the lack of attribute minimization to be annoying.  If I’d learned XHTML first, I’m sure I wouldn’t care about them, and would wonder why HTML was deficient in those areas.  Since I taught myself HTML when the cutting edge was HTML 2.0, I have some fairly deeply ingrained habits that cause me to lean toward HTML.  I also have a decade’s worth of documents that I don’t really feel like trying to convert to XHTML just so I can claim big markup geek bragging points.  Sure, there are tools that will do it for me.  Then I’d have to go back and check to make sure the tools didn’t mangle anything.

I’ve gotten the occasional critical e-mail about this over the past couple of years, chastising me for not being a role model in this area.  I like to think that I am.  I’m trying to show, by example, that there’s nothing wrong with using HTML 4.01 as long as you’re using it correctly.  If your HTML is valid, you’ll have no more trouble converting it to a ‘pure’ XML format than you will if XHTML is your starting point.  So I stay with HTML, and probably will for some time to come.

One other point: the education of my quotes is entirely due to WordPress, and in pages that aren’t part of WP I don’t have educated quotes.  C’est la mort.

I’ve already written François to congratulate him on his work, which I think is a good look at the underpinnings of the sites surveyed and sheds some light on points that don’t get discussed very often—or when they are, it’s in needlessly polarizing ways (witness the various flame wars over fixed width vs. liquid width, HTML vs. XHTML, and so on).  It’s refreshing to see someone collect some facts, do a little analysis, and freely share the results.


Pick A Heading

Published 20 years, 4 months past

Thomas Jogin recently posted an interesting entry that asks about HTML headings.  He starts out this way:

For some time now, something has been bothering me about the common way of marking up a document structure. First of all, it doesn’t seem like there is a consensus on wether or not to use multiple H1 tags, or any at all.

I agree: there isn’t a consensus on that point.  There is also a lack of consensus on his second question:

Secondly, since H1-H6 are supposed to denote the structure of a document, do they really have a place in the side column as section headlines?

In fact, there’s very little consensus on heading use in general, except that you should use them instead of font size="+3" and that sort of thing.  Headings make for more visibility in Google than does presentational markup, after all.

In considering Thomas’ question, Andy Budd goes to the specification, which is always a good place to start.  As he points out, HTML 4.01, section 7.5.5 states:

A heading element briefly describes the topic of the section it introduces. Heading information may be used by user agents, for example, to construct a table of contents for a document automatically.

There are six levels of headings in HTML with H1 as the most important and H6 as the least. Visual browsers usually render more important headings in larger fonts than less important ones.

So as far as a semantic definition of the heading elements goes, all we have is that heading levels indicate degrees of importance.  Nothing about what order they have to be in, or whether you can skip levels, or anything else besides the creation of a spectrum of importance, as it were.  Because that’s all we have, there is plenty of room for people to fill in their own interpretation of what’s best.  I approach headings as merely indicating a level of importance, and don’t bind myself to a decreasing numeric order.  That’s my take on it; others feel differently.  Let’s take a look at two cases where my approach led to very different results.

The first case is Netscape DevEdge.  Here’s the markup skeleton for the first part of the home page as I set it up (the actual markup as of your reading this may be different):

<h1>
 <a href="/" target="_top"><span>Netscape</span> DevEdge</a>
</h1>

<form action="/search/app/" id="srch" method="get">
<h4><label for="search-input">Search</label></h4>
(...inputs...)
</form>

<h2>Netscape 7.1 is now available</h2>
(...paragraph text...)
<h2>The DevEdge RSS-News Ticker Toolbar</h2>  
(...paragraph text...)
<h3>Recent News</h3>
(...list of links...)

Here, the order is h1-h4-h2-h2-h3.  I guarantee you that some readers are feeling a deep outrage right now, because we got more than one piece of feedback berating us for putting the headings ‘out of order’.  The h4 was clearly out of place, we were told, and needed to be fixed.

The placement of the search markup in the document was dictated by our preferred document linearization, however.  In an unstyled view (a really old browser, say, or a cellphone text browser) we wanted the masthead to be immediately followed by the search feature, and that to be followed by the main content.  The main content was in turn followed by the sidebars, which was followed by the navigation and configuration links, and finally the footer.  So we weren’t about to move the search markup just to make its heading level fit into a descending-number hierarchy.
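In skeleton form, that linearization looks something like the following; the div ids are invented for illustration and don’t necessarily match the live markup:

```html
<body>
  <div id="masthead">...</div>   <!-- masthead first... -->
  <div id="search">...</div>     <!-- ...immediately followed by search -->
  <div id="main">...</div>       <!-- then the main content -->
  <div id="sidebar">...</div>    <!-- then the sidebars -->
  <div id="tools">...</div>      <!-- navigation and configuration links -->
  <div id="footer">...</div>     <!-- and finally the footer -->
</body>
```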

We could have elevated the heading to an h2, but that struck me as being a poor choice.  Is the “Search” title really as important as the top headlines on the site?  I certainly didn’t think so.  My only other option was to change the h4 to some non-heading element and style it the same as I had the h4.  That’s certainly easy to do, since with CSS you can make any non-replaced element look like any other element, but that would mean the search area had no heading at all.  That also struck me as a poor choice.
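By “make any non-replaced element look like any other element,” I mean something like this; the class name is invented, and the declarations would simply copy whatever the real h4 rule says:

```css
/* Hypothetical stand-in: a classed div dressed up as an h4.
   The declarations would mirror the site's actual h4 styles. */
div.search-title {
  font-weight: bold;
  font-size: 1em;
  margin: 1.33em 0;
}
```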

So I took the route I did.  I suppose I could have, as Richard Rutter suggests, put in some extra headings to increase the structure and then “switched off” their display with CSS.  That would mean inserting an extra h2 and h3 between the masthead and the search.  What would they say?  Personally, I’m not thrilled with that idea either, at least for the DevEdge case.
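For completeness, “switching off” extra headings would look roughly like this, with invented class names.  Note that display: none hides the headings from many screen readers as well, which is why off-screen positioning is the usual alternative:

```css
/* Hide structural-filler headings from visual browsers.
   Caution: display: none also removes them from many screen
   readers, defeating the purpose of adding them... */
h2.structural, h3.structural { display: none; }

/* ...so positioning them off-screen is the friendlier route:

h2.structural, h3.structural {
  position: absolute;
  left: -9999px;
}
*/
```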

The second case I’d like to consider is meyerweb.com itself.  Here’s the basic structure of the home page, at least as of this writing:

<div id="sitemast">
<h1><a href="/"><span>meyerweb</span>.com</a></h1>
</div>
<div id="thoughts">
<h3 id="thtitle"><span>Thoughts From Eric</span></h3>
<div class="entry">
<h4 class="title"><a href="..." title="...">entry title</a></h4>
<h5 class="meta"><b>entry date</b></h5>
</div>
<div class="entry">
<h4 class="title"><a href="..." title="...">entry title</a></h4>
<h5 class="meta"><b>entry date</b></h5>
</div>
</div>

Here, I skipped the h2, but other than that everything’s in a descending-order hierarchy.  In this case, such an approach was a fairly natural fit to the information being presented.  I don’t have an h2 on the home page because it’s reserved for page titles, such as the “Eric A. Meyer” at the top of my personal page.  The page title of the home page would be “Home Page” or “Main Page” or something equally silly, so I left it off.  (And yes, there are b elements in my markup, and I’m doing that on purpose.  I’ll talk about it some other time.)

Meanwhile, in the sidebar, the section headings (such as “Distractions”) are enclosed in h4 elements.  That makes them as important as entry headings, and I’m okay with that.  I might have chosen h5 instead, but probably not h3.  The one exception is “Blog Watch”, which is an h5 element.  I don’t have any rigorous logical explanation for this; it’s all based on gut feelings.

In the end, I don’t think there will ever be a heading consensus.  XHTML 2.0 attempts to bridge the gap by adding the generic h (heading) element, which helps create a descending-order hierarchy when combined with nested section elements.  What bothers me is that the Working Group hasn’t decided whether or not h1 through h6 should be deprecated; to do so would in effect attempt to force a consensus in favor of a descending-order hierarchy.  That would mean that headings were no longer as much about importance as they were about having a document that’s structured like a specification, or maybe an e-book.

In the here and now, though, we’re left to ponder these questions and arrive at our own conclusions.  Each answer will depend on the temperament of the ponderer, and the project upon which the pondering is predicated.


Scorning Standards

Published 20 years, 4 months past

So in the last week, we had relaunches of Feedster and allmusic.com, and both sites were straight out of the Nineties: “this site best viewed on…”, browser blockers, and general lack of standards awareness.  Scott Johnson’s response in the case of Feedster is, in effect, “we don’t have the resources to support all browsers”.

Yes, you do.  It actually costs less to support all browsers.

What costs more is obsessing over making a design “look the same in all browsers”, which is in any case impossible.  Your site can’t possibly look the same on a cell phone as it does on my Cinema Display, and it’s not going to look the same in Mosaic 1.0 as it does in IE/Win.  Remember Mosaic?  It didn’t support tables.  A table-driven layout will completely and totally shatter in Mosaic.  I wonder if Feedster has a blocking message for Mosaic.

The point is that if you properly structure your content, then you can make it available to everyone.  You can set things up so that in more current browsers, the site will look pretty.  In older browsers, it won’t.  If the user really wants to get your content but your styles confuse it, then the user can disable styles (all the older browsers, and many newer ones, let you do that via the preferences).  If you identify a particularly problematic browser—whether it’s IE5/Mac or Netscape 4 or Opera 3.6 or whatever—then you can use JS to withhold the CSS from the browser.  Users of those browsers get the content.  You can throw in a message telling them why the site looks plain, if you like, but the important thing is that they get the content.
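As a sketch, the withholding script can be a one-line capability test wrapped around the style sheet link.  Here document.getElementById is a stand-in for whatever test actually isolates the problem browser, and main.css is a made-up filename:

```html
<script type="text/javascript">
// Serve the style sheet only to browsers that pass the test.
// document.getElementById is a stand-in condition; substitute
// whatever check excludes the browsers you need to exclude.
// Browsers that fail (or have no JS at all) still get all the
// content, just unstyled -- which is the whole point.
if (document.getElementById) {
  document.write('<link rel="stylesheet" href="main.css" media="screen">');
}
</script>
```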

For a site like Feedster, there’s really no excuse.  The main page is a search form that looks a whole lot like Google, except with more stuff on it.  After that, you get a list of search results.  The results will be just as useful with an unstyled presentation as with all the CSS in the world applied.  So to say that it would cost $1,500 to support IE/Mac, or anything else, is misleading at best.  It might cost $1,500 to figure out how to hack around a browser’s limitations in order to make the page “look the same”.  It would have cost $750 less to not take half an hour to implement a browser blocker and set up the blocker page, and just let all browsers in.  It would maybe have taken $275 worth of time to write a detector that withholds the style sheet from “unsupported” browsers, or else adds in a style sheet for the browsers you “support”.

As for allmusic.com, Tim Murtaugh created a more standards-compliant version of the main page in two hours.  Of course, it may not have consistent layout in multiple browsers, but another six hours could probably fix that.  I wish they would, because I use allmusic.com a lot in preparing for my radio show.  (And did I mention that the station has a new design for its site?  I had nothing to do with it.)  I won’t stop using it, of course, because they have good biographical information.  But I wish they’d done better.  It would have been little enough effort to do so.

