Posts in the Browsers Category

When Printing Kills

Published 20 years, 1 week past

Here’s a fascinating little tidbit: on some users’ machines, attempts to print out Joe Clark’s ALA article “Facts and Opinions About PDF Accessibility” would crash Internet Explorer.  The error message mentioned a script error in line 1401: “Object doesn’t support this property or method”.  Funny thing: we weren’t doing any scripting.  The error was actually occurring in shdoclc.dll/preview.dlg, which is of course a piece of the operating system.

Jason did some sleuthing and traced the crash to this line of markup:

<h2 id="tags">Tags and structure</h2>

Honestly, that was it.  So Jeffrey renamed the ID to read:

<h2 id="structure">Tags and structure</h2>

So far as we know, no more crashing in Explorer.

Ain’t browsers a slice?

(And yes, we’re aware of the clamor for a print style sheet.  More on this later.)

Update: Marten Veldthuis from Strongspace points out that 37signals ran into a very similar problem in Backpack.  Details can be found in Jamis Buck’s June 3rd post ie-is-teh-3v1l.  Spread the word: “tags” is effectively a reserved keyword, even though no such concept exists in (X)HTML.  Use it at your (users’) peril.


Web Page, Mutated

Published 20 years, 2 months past

One of the first rules of life is that first-hand information is always better than second-hand information.  You can be more certain of something if you’ve seen it with your own eyes.  Anything else is hearsay, rumor, conjecture—an article of faith, if you will.  At the very minimum, you have to have faith that your source is reliable.  The problems begin when sources aren’t reliable.

No, this isn’t a rant about the intelligence screw-ups prior to the invasion of Iraq.  Instead, it’s a warning that inspector programs and “save as Web page, complete” features can lead you astray.

One such example came up recently, shortly after I mentioned the launch of the new Technorati design.  A question came in:

I did want to ask about the use of -x-background-{x,y}-position as opposed to background-position. If I understand correctly, the -x prefix indicates an experimental CSS attribute, so in what circumstances should one use this sort of experimental attribute instead of an official one?

I’d have been glad to answer the question, if only I’d known what the heck he was talking about.  Those certainly weren’t properties I’d added to the style sheets.  They weren’t even properties I’d ever heard of, proprietary or otherwise.

Just to be sure, I loaded the CSS files found on the Technorati site into my browser and searched them for the reported properties.  No results.  I inquired as to where the reporter had seen them, and it turned out they were showing up in Firefox’s DOM Inspector.

Now, the DOM Inspector is an incredibly useful tool.  You can use it to look at the document tree after scripts have run and dynamically added content.  You can get the absolute (that is, root-relative) X and Y coordinates of the top left corner of every element, as well as its computed dimensions in pixels.  You can see the CSS rules that apply to a given element… not just the everyday CSS properties, but the stuff that the Gecko engine maintains internally.

That’s where the problem had come in.  The DOM Inspector was showing special property names, splitting the background-position values into two different pseudo-properties, and not showing the actual background-position declaration.  This, to me, is a flaw in the Inspector.  It should do two things differently:

  1. It should show the declaration found in the style sheet.  There should be a line that shows background-position and bottom left (or whatever), because that’s what the style sheet contains.
  2. It should present the internally-computed information differently than the stuff actually taken from the style sheet.  One possibility would be to show any internal property/value pair as gray italicized text.  I’d also like an option to suppress display of the internal information, so that all I see is what the style sheet contains.
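
For concreteness, here’s roughly what that mismatch looked like (an illustrative reconstruction, not the actual Technorati styles).  The style sheet contains an ordinary declaration:

#header {background-position: bottom left;}

…but the DOM Inspector reports only Gecko’s internal split of that value, with no sign of the original declaration:

#header {
	-x-background-x-position: left;
	-x-background-y-position: bottom;
}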

The person who asked why I was using those properties wasn’t stupid.  He was just unaware that his tool was giving him a distorted picture of the style sheet’s contents.

Don’t think Firefox is the only culprit in unreliable reporting, though.  Anyone who uses Internet Explorer’s save as “Web page, complete” feature to create a local copy for testing purposes isn’t getting an actual copy.  Instead of receiving local mirrors of the files found on the Web server, they’re getting a dump from the browser’s internals.  So an external style sheet will actually be what the browser computed, not what the author wrote.  For example, this:

body {margin: 0; padding: 0;
  background: white url(bodybg.gif) 0 0 no-repeat; color: black;
  font: small Verdana, Arial, sans-serif;}

…becomes this:

BODY {
	PADDING-RIGHT: 0px; PADDING-LEFT: 0px;
BACKGROUND: url(bodybg.gif) white no-repeat 0px 0px;
PADDING-BOTTOM: 0px; MARGIN: 0px; FONT: small Verdana, Arial, sans-serif;
COLOR: black; PADDING-TOP: 0px
}

Okay, so it destroys the authoring style, but it isn’t like it actually breaks anything, right?  Wrong.  Despite IE treating the universal selector correctly when rendering, for some reason any rule that employs one loses its universal selector when it’s saved as “Web page, complete”.  Thus, this:

#sidebar {margin: 0 74% 3em 35px; padding: 0;}
#sidebar * {margin: 0; padding: 0;}

…becomes this:

#sidebar {
	PADDING-RIGHT: 0px; PADDING-LEFT: 0px; PADDING-BOTTOM: 0px;
MARGIN: 0px 74% 3em 35px; PADDING-TOP: 0px
}
#sidebar  {
	PADDING-RIGHT: 0px; PADDING-LEFT: 0px; PADDING-BOTTOM: 0px;
MARGIN: 0px; PADDING-TOP: 0px
}

Oops.  Not only can this mean the local copy renders very differently from the “live” version, it’s also very confusing for anyone who’s saved the page in order to learn from it.  Why in the world would anyone write two rules in a row with the same selector?  Answer: nobody would.  Your tool simply fooled you into thinking that someone did.

Incidentally, if you want to see the IE-mangled examples I showed in a real live set of files on your hard drive, use IE/Win to save the home page of Complex Spiral Consulting as “Web page, complete”.  And from now on, I’ll always put “Web page, complete” in quotes because it’s an inaccurate label.  It should really say that IE will save as “Web page, mutated”.

So if you’re Inspecting a page, or viewing a saved copy, remember this:  nothing beats seeing the original, actual source with your own eyes.  If you see something odd in your local copy, your first step should be to go to the original source and make sure the oddness is really there, and not an artifact of your tools.


Don’t Read; Speak!

Published 20 years, 2 months past

With the debut of the WaSP’s ATF, a vigorous conversation has gotten underway.  Joe Clark weighed in with some suggestions, Andy Clarke got some rousing comment action, and more have spoken up.  This follows some recent and widely-cited thoughts from Matt May on WCAG 2.0 (with an opposing view from Gez Lemon), and from Andy Clarke regarding accessibility and legislation (which inspired the publication of a different view from Andy Budd, not to mention another from Chris Kaminski).  I’ll join the chorus with some points of my own.  (Apparently, my recent post Liberal vs. Conservative was taken as a contribution to the discussion, which it wasn’t meant to be, although the points raised there are definitely worth considering in this context.)

This past May, I delivered a keynote at the 2nd International Cross-Disciplinary Workshop on Web Accessibility in Tokyo, and one of the major points I made was basically this: “Screen readers are broken as designed, and need to become speaking browsers”.

The problem is that screen readers are just that: they read what’s displayed on the screen for a sighted user.  In other words, they let Internet Explorer render the Web page, scrape the visual result, and read that.  I will acknowledge that in the tables-and-spacers era of design, this made a certain amount of sense.  That era is ending; in an important sense, it’s already over and we’re just cleaning up the mess it left.  Which is not to say that table markup is no longer used for layout purposes, nor is it to say that such markup should be used.  Okay?

What I’m saying is that screen readers need to become speaking browsers: they need to ignore how the page is visually displayed, and read the content.  Use semantic markup when it exists, and otherwise ignore the markup in favor of the actual words, whether it’s plain text or alt text.  Go from the beginning of the document to the end of the document, and ignore the CSS—at least that CSS which is meant for visual media, which these days is pretty much all of it.

You might wonder how a speaking browser should deal with a table-driven site, of which there are still quite a few, he said with some understatement.  One distinct possibility is to do what I just said: ignore the non-semantic markup and read the content.  I can accept that might fail in many cases, so I’ll present a fallback: DOCTYPE switching.  If a document has a DOCTYPE that would put a visual browser into standards mode, then be a speaking browser.  If not, then be a screen reader.
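
To sketch the idea (the DOCTYPE here is just one example of a standards-mode trigger), a document that leads off like this would get the speaking-browser treatment:

<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01//EN"
  "http://www.w3.org/TR/html4/strict.dtd">

…while a document with no DOCTYPE at all would flip the tool into screen-reader mode, exactly as it flips a visual browser into quirks mode.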

DOCTYPE switching has been, despite a few hiccups, incredibly successful in helping designers move toward standards, and allowing browsers to permit standards-based design without sacrificing every page that’s come before.  The same, or at least a very similar, mechanism could help audible-Web tools.

The WaSP has done great things in their efforts to show vendors why Web design tools should produce standards-oriented markup and CSS.  I sincerely hope they can produce similar results with audible-Web vendors.


Increasing the Strength of Ajax

Published 20 years, 2 months past

There’s been some comment recently about how Ajax programming requires a different approach to UI and user notification.  What Jeff Veen wrote about Designing for the Subtlety of Ajax, and Alex Bosworth‘s post on the top 10 Ajax Mistakes, are just two examples.

I pretty much agree with both pieces.  I’ve missed updates more than once on Ajax pages, just because I’m too used to how pages usually work.  I’ll click on something and then my attention will, out of habit, instantly go elsewhere—another window, another application, another computer, whatever—and keep subconscious track of what was happening in the window where I clicked, monitoring it in my peripheral vision for the flicker of a page reload.  Eventually there will be a little tickle in the back of my brain that says, “Hey, didn’t that site ever do anything?”  When I finally look straight at it, I realize that it did something quite a while ago, probably a split second after my mental focus moved away.  Instead of being efficient, I was wasting time waiting for a refresh that never came.

One might think it’s time for an “Ajax enabled” badge on pages so we know “better pay attention, ’cause this ain’t your father’s Web page”.  I don’t think that’s the way to go, however.  I think what’s needed is a more mature HCI design sense.  Web design has long relied on the page-update refresh to tell the user something has happened; this was such a part of the Web’s fabric that designing around it was almost unconscious.  There hasn’t been a need for sophisticated HCI considerations… until now.

In other words, Web design is going to need to grow up, and become more HCI-oriented than it has been.  The usability of a Web site will become as much about how you let the user know they’ve done something as it is about getting them to the thing they want to do.  In addition to getting the page to look inviting and present the information well, it will be necessary to obsess over the small details, implement highlights and animations and pointers—not to wow the user, but to help them.

In this endeavor, it’s worth remembering that there is a very large and long-standing body of research on HCI.  For years, many HCI experts have complained that the Web design field is making all sorts of errors that could be avoided if we’d just pay attention to what they’re telling us—a criticism which was not totally inaccurate.  Some Web design experts shot back that the Web was a different medium than the sorts of things HCI people studied, and anyway, the Web was not an application—and that rejoinder was also not totally inaccurate.  But with Ajax, the Web-application dichotomy is disappearing.  The retort is becoming less accurate, and the criticism more accurate.

I don’t claim to know what should be done.  The simplest update notification would be to set the visibility of the body element to hidden for half a second, and then back to visible, thus visually simulating a page refresh.  Crude, but it would play directly to users’ expectations.  The fading yellow highlight in Basecamp gets a lot of attention (and imitators), and that’s a good way to go too.  We could envision tossing a red outline onto something that changed, or animating a target-reticle effect on the updated content, or any number of other ideas.  Again I say: the decades of work done in HCI research are a resource we should not ignore.
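
The script for that visibility trick would be nearly trivial.  Here’s a quick sketch (the function name is mine, and something would have to call it when the Ajax update lands):

function simulateRefresh() {
  // Blank the page, as though a reload had just begun...
  document.body.style.visibility = "hidden";
  // ...then restore it half a second later.
  setTimeout(function() {
    document.body.style.visibility = "visible";
  }, 500);
}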

From my perspective, there are at least two good things in the Ajax world.  First is that the need for understanding and using CSS, XHTML, and the DOM has never been greater.  Okay, it’s a slightly selfish thing, but it leads directly into the second good thing: that the need for standards support has never been more critical.  If a browser wants to play in the Ajax space—if it wants to be a serious platform for delivering applications—then it’s going to have to get along with the others.  It’s going to have to support the standards.


Liberal vs. Conservative

Published 20 years, 2 months past

So it turns out that crackers can mess up your Web site with nothing more than a malformed HTTP packet.  You might think something as simple as HTTP would be basically risk-free, but no, I’m afraid not.  All it takes is interaction between programs that handle HTTP data slightly differently, and hey presto, you’ve got a security hole.
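
To illustrate the general shape of such a hole (a simplified sketch of the attack class, not the specific exploit reported), consider a request that arrives bearing two conflicting Content-Length headers:

POST /page HTTP/1.1
Host: example.com
Content-Length: 0
Content-Length: 44

If a proxy honors the first header while the server behind it honors the second, the two programs disagree about where this request ends, and the next 44 bytes on the wire become part of this request to one of them and the start of a brand-new request to the other.  That disagreement is the hole.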

Ben Laurie weighed in on this:

“It is interesting that being liberal in what you accept is the base cause of this misbehaviour,” Laurie says. “Perhaps it is time the idea was revisited.”

That’s a reference to the late Jon Postel’s dictum (from RFC 793) of “be conservative in what you do, be liberal in what you accept from others”.  This is done in the name of robustness: if you’re liberal in what you accept, you can recover from data corruption caused by unanticipated problems.

Laurie’s right.  The problem is that being liberal in what you accept inevitably leads to a systemic corruption.  Look at the display layer of the Web.  For years, browsers have been liberal in what markup they accept.  What did it get us?  Tag soup.  The minute browsers allowed authors to be lazy, authors were lazy.  The tools written to help authors encoded that laziness.  Browsers had to make sure they could deal with even more laziness, and the tools kept up.  Just to get CSS out of that death spiral, we (as a field) had to invent, implement, and explain DOCTYPE switching.

In XML, it’s defined that a user agent must throw an error on malformed markup and stop.  No error recovery attempts, just a big old “this is broken” message.  Gecko already does this, if you get it into full-on XML mode.  It won’t do it on HTML and XHTML served as text/html, though, because too many Web pages would just break.  If you serve up XHTML as application/xhtml+xml, and it’s malformed, you’ll be treated to an error message.  Period.
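
To see it in action, serve something like this little document as application/xhtml+xml; the unclosed p element will stop Gecko with a parsing error instead of a rendered page:

<?xml version="1.0" encoding="utf-8"?>
<html xmlns="http://www.w3.org/1999/xhtml">
<head><title>Malformed</title></head>
<body>
<p>This paragraph never closes.
</body>
</html>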

And would that be so bad, even for HTML?  After all, if IE did it, you can be sure that people would fix their markup.  If browsers had done it from the beginning, markup would not have been malformed in the first place.  (Weird and abnormal, perhaps, but not actually malformed.)  Håkon said five years ago that “be liberal in what you accept” is what broke the Web, markup- and style-wise.  It’s been a longer fight than that to start lifting it out of that morass, and the job isn’t done.

Authors of feed aggregators have similar dilemmas.  If someone subscribes to a feed, thus indicating their interest in it, and the feed is malformed, what do you do?  Do you undertake error recovery in an attempt to give the user what they want, or do you just throw up an error message?  If you go the error route, what happens when a competitor does the error recovery, and thus gets a reputation as being a better program, even though you know it’s actually worse?  That righteous knowledge won’t pay the heating bills, come winter.

“So what?” you may shrug.  “It’s not like RSS feeds can be used to breach security”.

Which is just what anyone would have said about HTTP, until very recently.

In the end, the real problem is that liberal acceptance of data will always be used.  Even if every single HTTP implementor in the world got together and made sure all their implementations did exactly the same strictly correct conservatively defined thing, there would still be people sending out malformed data.  They’d be crackers, script kiddies—the people who have incentive to not be conservative in what they send.  The only way to stop them from sending out that malformed data is to be conservative in what your program accepts.

Even then, it might be possible to exploit loopholes, but at least they’d be flaws in the protocol itself.  Finding and fixing those is important.  Attempting to cope with the twisted landscape of bizarrely interacting error-recovery routines is a fool’s errand at best.  Unfortunately, it’s an errand we’re all running.


That Acid Buzz

Published 20 years, 5 months past

Just a few days after Chris Wilson’s post to IEblog, Håkon Wium Lie, CTO of Opera and one of the originators of CSS, published a column on news.com titled “The Acid2 Challenge to Microsoft”, outlining the intent to create “a test page… that will actively use features Web designers crave, such as fixed positioning of elements”.  As indicated in his article and confirmed via the Buzz, the Web Standards Project is a partner with Håkon in the development of this new “test suite”, as it’s termed on the WaSP Buzz.

I don’t know about you, but as I read the article, several red flags went up and alarm bells rang in my head.

First off, the Acid2 challenge to Microsoft?  Why only Microsoft?  An acid test worthy of its name would expose bugs in every browser on the market today.  The original test did exactly that, and helped change the face of the Web.  In fact, if you’re still using IE/Mac, the first browser to actually get the Acid test correct, you can see it in action.  Type about:tasman into the address bar, and there it is, with modified text.

Second, the original Acid test (which I haven’t linked to because it seems to no longer be available on the Web) was part of a larger and more constructive effort.  At the time, Acid test author Todd Fahrner was (as was I) a member of the WaSP’s CSS Action Committee.  If that name doesn’t sound familiar, you might be more familiar with the CSS Samurai.  One of the things the CSS AC did was to produce reports on the top ten failings of CSS support in various browsers.  We didn’t just say, “Browser X should be better”.  We wrote up what should be better, and why, and pointed to test cases illustrating the problems.  The Acid test was justifiably famous, but it was in the end one test among many.

And those tests were tough for all browsers, not just one browser or one company’s browsers.  We weren’t partisan snipers, despite what many claimed.  We worked to point the way toward better behavior on the part of all browsers by focusing on the problems specific to each browser.

I am no longer a member of the WaSP.  When the first incarnation of the organization went into dormancy, the CSS AC went along with it.  Although the new WaSP has asked me to join a few times, I have resisted for various reasons—personal, professional, and perceptual.  I was also asked if I wanted to contribute to the Acid2 effort as an independent, and declined that as well.  So in many ways, this post is the epitome of something I find distasteful: a person who has had every chance to make contributions, and instead criticizes.  In my defense, I can say only that while I may have refused these invitations, it is not out of antagonism to the basic goal of the WaSP.  I have every reason to want the WaSP to succeed in its goal of advancing the cause of standards on the Web.

But this Microsoft-centric campaign has me concerned, and ever so slightly appalled.  The creation of a tough CSS test suite is a fantastic undertaking, something that is to be applauded and is probably long overdue.  But to cast it as an effort being undertaken as a challenge to Microsoft not only starts it off on the wrong foot, it has the potential to taint not just the Acid2 effort, but the entire organization.


Exploring Better Standards Support

Published 20 years, 5 months past

While I was preparing for SXSW, Chris Wilson—and there’s a name that takes me back a few years—posted an entry on IEblog about standards.  I’m not going to excerpt any of it here because most of you have already read it.  For the rest of you, go read it.  As long as you don’t continue into the comments, it won’t take very long.

First off, let me say that I’ve known Chris for many years, and we get along great together.  I have a lot of respect for him, and I firmly believe the feeling is mutual.  He did incredible work in the very early years of CSS, and while some of that work may seem lacking when viewed in light of later implementations, it was that all-important first step on the journey of a thousand miles.  If I ever make it to Seattle with a couple of days to spare, he’s right near the top of a pretty short list of people I’d do my utmost to find time to see while I was there.  (I just added another person to that list a couple of days ago, actually, but that’s a story for another time.)

With a paragraph like that, you probably think I’m going to tear into him now.  Nope.

I’m posting my thoughts on this for three reasons.

  1. Chris was nice enough to name-check meyerweb as a site that’s helped “[harvest] the collective knowledge of the web development community” with regard to standards.  If nominations were being taken, I’d point to the css-discuss wiki before I would meyerweb, but nevertheless I’d like to think I’ve earned a place on that list—and I’m glad that Chris thinks so too.
  2. Some of my writing from the post “Unbreaking the Web” was quoted in a comment by Thomas Tallyce.
  3. The 800-pound gorilla is stirring.  It’s hard not to share a few observations.

So as Chris points out, the IE team faces an enormous challenge.  This is compounded by the enormous loss of IE developers over the past few years.  Think about it.  Would you work on a project that was the legal and administrative equivalent of a toxic cloud?  Internet Explorer is the focal point of dozens of lawsuits, antitrust litigation, and more.  It’s a project straitjacketed by its own success (however rightly or wrongly that success was achieved).  I don’t have any direct knowledge of this, but the IE team has probably become the Marie Celeste of Microsoft, a doomed wanderer of the bureaucratic seas, staffed by a few trapped souls and the subject of whispered tales of horror among the engineers.

( “And there… dangling from the door handle… was a scripting hook!” )

Despite this recent legacy of pariahship, it would seem that resources are gathering behind Explorer, and not just on the security front.  Chris says, and I have no reason to doubt him, that plans are afoot to add standards support to Explorer.  My concern is over the fate of those plans, because the best-laid plans… you know.  No matter how much the engineering team might want something, if their irresistible geek force encounters an immovable administrative object, well, my money’s on the object.  The only hope is to interpret the object as damage and route around it, which is usually a lot harder to accomplish in a bureaucracy than it is in a network topology.

Chris’ post makes it very clear that backwards compatibility will not be sacrificed, at least in quirks mode.  I wrote some thoughts along those lines in “Unbreaking the Web”, so I won’t repeat them here.  In summary: improving standards support will not break the Web.  It won’t even break the vast majority of sites, and any sites that do break will get sorted out in short order.  With a public beta, those problems could be identified and solved well before the browser went final.  Backwards compatibility is no longer a reasonable excuse for avoiding standards support.

And then there are the resource limitations.  It’s hard to think of anything Microsoft does as lacking in resources, but just as there are hungry people in America, there are starving programs within Microsoft.  I believe that, for some time now, the IE project has been living on a sub-subsistence diet.  It will probably be hard to attract people to help feed it.  The staffing requirements for regression testing alone would be daunting.  I don’t envy the IE managers their task—all the more so because no matter what they do, it won’t be enough for some people.  They’re going to get slammed.  Their only real choice is in trying to pick the things for which they’ll be slammed less.  If improving standards support in IE isn’t a corporate (or divisional) priority, they’re in for a world of hurt.  Which is why I sincerely hope they’re a priority.

But neither is that a complete excuse.  Working for a firm like Microsoft means taking on massive challenges, doing more than you thought possible with less than you should have available, pulling long hours and pounding your head against a wall in order to do the apparently impossible.  That’s part of the job description, and being there is pretty much a matter of choice.  I say this isn’t a complete excuse because, obviously, any given team can only accomplish so much.  It just isn’t a “get out of jail free” card.  If you’re going to tell us that standards are important and that support will be improved, the improvement has to be notable.  There has to be evidence that a lot of work went into adding a lot of useful things, and fixing a lot of old problems.  Again, this is because the promise was freely made, not because it’s what the Web Gods demand.

We all, and by that I mean “us Web designers and developers”, need to stay involved in this conversation.  It’s easy to post a few thoughts, assume that they’ve been ignored, and let things drift.  It’s also easy to assume that the entire IE team read your ideas and immediately agreed to every single last one of them because they’re so blindingly obviously critical, and then get completely enraged when they don’t show up in the final product.  I for one plan to keep an eye on this situation, and to think about ways I could help the IE team.

Because if I truly care about standards—and I do—then I owe the IE team as much as I’ve given to the teams working on Firefox, Safari, Opera, and all the rest.  We all do.  Whatever we would have done for the least of these our browsers in the name of advancing standards support, we owe the Explorer team no less.

Chris did ask for specific requests, so here are my top ten CSS requests in priority order:

  1. Support all selectors—including CSS3 selectors, which I believe are stable enough to be implemented
  2. Clean up positioning and add fixed positioning
  3. Clean up floating/clearing
  4. min-/max-width/height (got that?)
  5. Fix problems with inline layout, especially the handling of top and bottom padding and margins on inline elements
  6. Arbitrary-element hover
  7. Focus styling for form elements
  8. Better printing support, including better page-break control and page orientation
  9. Support CSS table styling, including the table-centered display values
  10. Support generated content

…plus the unranked but still very important “fix bugs! fix bugs! fix bugs!”.
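
To put a little flesh on those bones, here are a few purely illustrative declarations of the kind that list implies, each one something Explorer currently mishandles or ignores:

/* fixed positioning (#2) */
#masthead {position: fixed; top: 0; left: 0;}

/* constrained dimensions (#4) */
#content {min-width: 20em; max-width: 60em;}

/* arbitrary-element hover (#6) */
tr:hover {background: #FFC;}

/* generated content (#10) */
.external:after {content: " (external link)";}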

Did I miss anything important, or under- or over-value anything, on that CSS list?  Let us know.


Tabular Weirdness

Published 20 years, 7 months past

Recently I was doing some table styling for a client and ran into what I can only call tabular weirdness.  There were two different things that I stumbled across, and interestingly, they were the kinds of problems you wouldn’t be likely to encounter in layout tables.  These would come up much more often in data tables.

In the first case, the general idea was to put some space between the tables and the surrounding material, but as these were data tables, they came with captions.  So I of course put the caption text in caption elements.  That’s when things started to get inconsistent.

To be more precise, the problems began after I left Safari to check the page in other browsers. In Safari, you see, the caption’s element box is basically made a part of the table box.  It sits, effectively, between the top table border and the top margin.  That allows the caption’s width to inherently match the width of the table itself, and causes any top margin given to the table to sit above the caption.  Makes sense, right?  It certainly did to me.

However, according to section 17.4 of CSS2.1 and the figure that accompanies it, the caption sits entirely outside the table’s box, and that includes the table’s margin.  The two are still tied together by the generation of an anonymous box, but the upshot is that if you give the table left and right margins, then the caption does not follow suit.  If you give the table a top margin, it pushes the caption away from the table. This is the behavior evinced by Firefox 1.0, and as unintuitive as it might be, it’s what the specification demands.
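
In code terms, the point of contention is as simple as this (an illustrative rule, not the client’s actual styles):

table {margin: 1em 2em;}

Per the specification, the caption sits outside the box those margins create: the 1em top margin opens a gap between the caption and the table, and the 2em side margins indent the table without dragging the caption along.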

The second piece of strangeness was found in IE/Win.  What I’d done was simply said that some cell borders should be solid—nothing more complicated than border-bottom: 1px solid.  The idea was that it would, as borders do, pick up the foreground color of the cell, but IE/Win had other ideas.  As best I could tell, the borders were a light gray.  You can see it happen in the testcase I constructed to create the images in this entry.  Explicitly specifying a border color fixes the problem, of course, but it was a bit of weirdness I thought I’d pass along in case anyone runs into the same thing.
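
For reference, the difference between what failed and the workaround comes down to a single keyword (a reconstruction of the general pattern, not the client's actual styles):

/* what I wrote: the border should pick up the cell's color */
td {color: black; border-bottom: 1px solid;}

/* what IE/Win needed: an explicit border color */
td {color: black; border-bottom: 1px solid black;}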

