Archive: 'Browsers' Category

line-height: abnormal

When I first wrote Cascading Style Sheets: The Definitive Guide, the part that caused me the most difficulty and headaches was the line layout material.  Several times I was sure I had it all figured out and accurately described, only to find out I was wrong.  For two weeks I corresponded with Ian Hickson and David Baron, arguing for my understanding of things and having them show me, in merciless detail, how I was wrong.  I doubt that I will ever stop owing them for their dedication to getting me through the wilderness of my own misunderstandings.

Later on, I produced a terse description of line layout which went through a protracted vetting process with the CSS Working Group and the members of www-style.  At the time it was published, there was no more detailed and accurate description of line layout available.  Even at that, corrections trickled in over the years, which made me think of it as my own tiny little The Art of Computer Programming.  Only without the small monetary reward for finding errors.

The point here is that line layout is very difficult to truly understand—even given everything I just said, I’m still not convinced that I do—and that there are often surprises lurking for anyone who goes looking into the far corners of how it happens.  As I’ve said before, my knowledge of what goes into the layout of lines of text imparts a sense of astonishment that any page can be successfully displayed in less than the projected age of the universe.

Why bring all this up?  Because I went and poked line-height: normal with a stick, and found it to be both squamous and rugose.  As with all driven to such madness, I now seek, grinning wildly, to infect others.

Here’s the punchline: the effects of declaring line-height: normal not only vary from browser to browser, which I had expected—in fact, quantifying those differences was the whole point—but they also vary from one font face to another, and can also vary within a given face.

I did not expect that.  At least, not consciously.

My work, let me show it to you: a JavaScript-driven test file where you can pick from a list of fonts and see what happens at a variety of sizes.  (Yes, the JS is completely obtrusive; and yes, the JS is the square of amateur hour.  Let’s move on, please.  I’m perfectly happy to replace what’s there with unobtrusive and sharper JS, as long as the basic point of the page, which is testing line-height: normal, is not compromised.  Again, moving on.)

When you first go to the test, you should (I hope) see a bunch of rulered boxes containing text using the very common font face Webdings, set at a bunch of different font sizes.  The table shows you how tall the simple line boxes are at each size, and therefore the numeric equivalent for line-height: normal at those sizes.  So if a line box is using font-size: 50px and the line box is 55 pixels tall, the numeric equivalent for line-height: normal is 1.1 (55 divided by 50).
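For the curious, here's a minimal sketch of how such a measurement can be made.  This is not the actual test page's script, and the function name is made up; it just shows the arithmetic:

function normalEquivalent(face, size) {
  // Build a one-line probe element using the face and size under test.
  var probe = document.createElement("div");
  probe.style.fontFamily = face;
  probe.style.fontSize = size + "px";
  probe.style.lineHeight = "normal";
  probe.appendChild(document.createTextNode("x"));
  document.body.appendChild(probe);
  // With no padding or borders, the div's height is the line box's height.
  var boxHeight = probe.offsetHeight;
  document.body.removeChild(probe);
  return boxHeight / size;   // e.g. 55 / 50 = 1.1
}

Call that once per font size and you have the numbers the test page reports.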

On my PowerBook, Webdings always yields a 1:1 ratio between the font-size and line box height.  The ten-pixel font size yields a ten-pixel-tall line box, and so on.

This is actually a little surprising by itself.  The CSS 2.1 specification says:

normal
Tells user agents to set the used value to a “reasonable” value based on the font of the element. The value has the same meaning as <number>. We recommend a used value for ‘normal’ between 1.0 to 1.2. The computed value is ‘normal’.

This is basically what CSS has said since its first days (see the equivalent text in CSS1 or in CSS2 for confirmation) and there’s always been a widespread assumption that, since 1.0 is probably too crowded, something around 1.2 is much more likely.

So finding a value of 1 was a surprise.  It was an even bigger surprise to me that this held true in Camino 1.5.2, Firefox 2.0.0.14, and Safari 2.0.4, all on OS X.  Firefox 3b5 didn’t render Webdings at all, so I don’t know if it would do the same.  I actually suspect not, for reasons best left for another time (and, possibly, a final release of Firefox 3).

Various browsers doing the same thing in an under-specified area of the spec?  That can’t be right.  It’s pretty much an article of faith that given the chance to do anything differently, browsers will.  The sailing was so unexpectedly smooth that I immediately assumed a storm lurked just over the horizon.

Well, I was right.  All I had to do was start picking other font faces.

To start, I picked the next font on the list, Times New Roman, and the equivalent values for normal immediately changed.  In other words, the numeric equivalents for Times New Roman are different than those for Webdings.  The browsers weren’t maintaining a specific value for normal, but were altering it on a per-face basis.

Now, this is legal, given the way normal is under-specified.  There’s room to allow for this behavior.  It’s actually, once you think about it, a fairly good thing from a visual point of view: the best default line height for Times New Roman is probably not the best default line height for Courier New.  So while I was initially surprised, I got over it quickly.  The seemingly obvious conclusion was that browsers were actually respecting the fonts’ built-in metrics.  This was reinforced when I found that the results were exactly the same from browser to browser.

Then I looked more closely at the numbers, and confusion set back in.  For Times New Roman, I was getting values of 1.1, 1.12, 1.16, 1.15, 1.149, and 1.1499.  If you were to round all of those numbers to two decimal places, you’d get 1.10, 1.12, 1.16, 1.15, 1.15, 1.15.  If you round them all to one decimal place, you’d get 1.1, 1.1, 1.2, 1.2, 1.1, 1.1.  They’re inconsistent.

But wait, I thought, I’m trying to compare numbers I derived by dividing pixels by pixels.  Let’s turn it around.  If I multiply the most precise measurement I’ve gotten by the various font sizes, I get… carry the two… 11.499, 28.7475, 57.495, 114.99, 1149.9, 11499.  As compared to the actual values I got, which were 11, 28, 58, 115, 1149, and 11499.

Which means the results were inappropriately rounded up in some cases and down in others.  28.7475 became 28 and 1149.9 became 1149, whereas 57.495 became 58.  Even though 11.499 became 11 and 114.99 became 115.
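Put in code form, the check amounts to nothing more than this (1.1499 being the most precise ratio I’d derived):

var ratio = 1.1499;
var sizes = [10, 25, 50, 100, 1000, 10000];
var expected = [];
for (var i = 0; i < sizes.length; i++) {
  expected.push(sizes[i] * ratio);
}
// expected: 11.499, 28.7475, 57.495, 114.99, 1149.9, 11499
// measured: 11,     28,      58,     115,    1149,   11499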

This was consistent across all the browsers I was testing.  So again, I was suspecting the fonts themselves.

And then I switched from Times New Roman to just plain old Times, and the storm was full upon me.  I’ll give you the results in a table.

Derived normal equivalents for Times in OS X browsers

font-size   Camino 1.5.2   Firefox 2.0.0.14   Safari 2.0.4
10          1              1.2                1.3
25          1              1                  1.16
50          1              1                  1.18
100         1              1                  1.15
1000        1              1                  1.15
10000       1              1                  1.15

Much the same happened when comparing Courier New with plain old Courier: full consistency on Courier New between browsers, albeit with the same strange (non-)rounding effects as seen with Times New Roman; but inconsistency between browsers on plain Courier—with Camino yielding a flat 1 down the line, Firefox going from 1.2 to 1, and Safari having a range of values above the others’ values.

Squamous!  Not to mention rugose!

Now it’s time for the stunning conclusion that derives from all this information, which is: not here.  Sorry.  So far all I have are observations.  I may turn all this into a summary page which shows the results for all the font faces across multiple browsers and platforms, but first I’ll need to get those numbers.

I do have a few speculations, though:

  1. Firefox’s inconsistency within font faces (see Times and Courier, above) may come from face substitution.  That’s when a browser doesn’t have a given character in a given face, so it looks for a substitute in another face.  If Firefox thinks it doesn’t have 10-pixel Times, it might substitute 10-pixel something else serif-ish, and that face has different line height characteristics than Times.  I don’t know what that other face might be, since it’s not Times New Roman or Georgia, but this is one possibility.  It is not the minimum font size setting in the preferences, as I’ve triple-checked to make sure I have that set to “None”.

  2. Another possibility for Firefox’s line height weirdness is a shift from subpixel font rendering to pixelly font rendering.  10-pixel text in Firefox is distinctly pixelly compared to the other browsers I tested, while sizes above there are nice and smooth.  Why this would drive up the line height by two pixels (20%), though, is not clear to me.

  3. Much of what I’ve observed will likely be laid to rest at the doorsteps of the font faces themselves.  I’d like to know how it is that the rounding behaviors are so (mathematically) messed up within faces, though.  Perhaps ideal line heights are described as an equation rather than a simple ratio?

Again, this was all done in OS X; I’ll be very interested to find out what happens on Windows, Linux, and other operating systems.  Side note for the Mac Opera fans warming up their flamethrowers: I’ve left Opera 9.27 for OS X out of this because it seems to cap font sizes at a size well below 1000, although this limit varied from one face to another.  Webdings and Courier capped at 507 pixels, whereas Courier New capped at 574 pixels and Comic Sans MS stopped at 707 pixels.  I have no explanation, though doubtless someone will, but the upshot is that direct comparisons between Opera and the other browsers are impossible.  For sizes up to 100 pixels, the results were exactly consistent with Camino, if that means anything.

The one tentative conclusion I did reach is this: line-height: normal is a jumbled terrain of inconsistent behaviors, and it’s best avoided in any sort of precision layout work.  I’d already had that feeling, but at least now there’s some evidence to back up the feeling.
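In practical terms, that means pinning the value down yourself wherever box heights actually matter.  A minimal illustration (the selector is just an example):

body {line-height: 1.2;}   /* an explicit number instead of normal */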

In any case, I doubt this is the last I’ll have to say on this particular topic.

Update 7 May 08: I’ve updated the test page with a fix from Ben Lowery so that it works in IE.  Thanks, Ben!  Now all I need is to add a way to type in any arbitrary font-family’s name, and we’ll have something everyone can use.  (Or else a way to use JavaScript to suck up the names of all the fonts installed on a machine and put them into the dropdown.  That would be cool, too.)

Acid Redux

So the feeds I read have been buzzing the past few days with running commentary on the WebKit and Opera teams’ race to be the first to hit 100/100 on Acid3, and then after that the effort to get a pixel-perfect match with the reference image.  Last I saw, Opera claimed to have gotten to 100 first, while WebKit looked to have gotten both with something publicly available; I haven’t verified any of this for myself.  Nor do I have any particular plans to do so.

Because as lovely as it is to see that you can, in fact, get one or more browser implementation teams to jump in a precisely defined sequence through a series of cunningly (one might say sadistically) placed hoops, half of which are on fire and the other half lined with razor wire, it doesn’t strike me as the best possible use of the teams’ time and energy.

No, I don’t hate standards, though I may hate freedom (depends on who’s asking).  What I disagree with is the idea that if you cherry-pick enough obscure and difficult corners of a bunch of different specifications and mix them all together into a spicy meatball of difficulty, it constitutes a useful test of the specifications you cherry-picked.  Because the one does not automatically follow from the other.

For example, suppose I told you that WebKit had implemented just the bits of SMIL-related SVG needed to pass the test, and that in doing so they exposed a woefully incomplete SVG implementation, one that gets something like 2% pass rates on actual SMIL/SVG tests.  Laughable, right?  Yes, well.

Of course, that’s in a nightly build and they might totally support SMIL by the time the corresponding final version is released and we’ll all look back on this and laugh the carefree laugh of children in springtime.  Maybe.  The real point here is that the Acid3 test isn’t a broad-spectrum standards-support test.  It’s a showpiece, and something of a Potemkin village at that.  Which is a shame, because what’s really needed right now is exhaustive test suites for specifications: XHTML, CSS, DOM, SVG, you name it.  We’ve been seeing more of these emerge recently, but they’re not enough.  I’d have been much more firmly in the cheering section had the effort that went into Acid3 gone into, say, an obsessively thorough DOM test suite.

I’d had this post in mind for a while now, really ever since Acid3 was released.  Then the horse race started to develop, and I told myself I really needed to get around to writing that post—and I got overtaken.  Well, that’s being busy for you.  It’s just as well I waited, really, because much of what I was going to say got covered by Mike Shaver in his piece explaining why Firefox 3 isn’t going to hit 100% on Acid3.  For example:

Ian’s Acid3, unlike its predecessors, is not about establishing a baseline of useful web capabilities. It’s quite explicitly about making browser developers jump… the Acid tests shouldn’t be fair to browsers, they should be fair to the web; they should be based on how good the web will be as a platform if all browsers conform, not about how far any given browser has to stretch to get there.

That’s no doubt more concisely and clearly stated than I would have managed, so it’s all for the best that he got to say it first.

By the by, I was quite intrigued by this part of Mike’s post:

You might ask why Mozilla’s not racking up daily gains, especially if you’re following the relevant bugs and seeing that people have produced patches for some issues that are covered by Acid3.

The most obvious reason is Firefox 3. We’re in the end-game of building what I really do believe is the best browser the web has ever known, and we expect to be putting it in the hands of more than 170 million users in a pretty short period of time. We’re still taking fixes for important issues, but virtually none of the issues on the Acid3 list are important enough for us to take at this stage. We don’t want to be rushing fixes in, or rushing out a release, only to find that we’ve broken important sites or regressed previous standards support, or worse introduced a security problem. Every API that’s exposed to content needs to be tested for compliance and security and reliability… We think these remaining late-stage patches are worth the test burden, often because they help make the web platform much more powerful, and reflect real-web compatibility and capability issues. Acid3’s contents, sadly, are not as often of that nature.

You know, it’s weird, but that seems really familiar, like I’ve heard or read something like that before.  Now if only I could remember…  Oh yeah!  It’s basically what the IE team said about not passing Acid2 when the IE7 betas came out, for which they were promptly excoriated.

Huh.

Well, never mind that now.  Of course it was a totally different set of circumstances and core motivations, and I’m sure there’s absolutely no parallel to be drawn between the two situations.  At all.

Returning to the main point here:  I’m a little bit sad, to tell the truth.  The original acid test was a perfect example of what I think makes for a good stress test.  Recall that the test’s original name, before it got shorthanded, was the “Box Model Acid Test”.  It was a test of CSS box model handling, including floats.  That’s all it was designed to do.  It did that fairly well for its time, considering it was part of a CSS1 test suite.  It didn’t try to combine box model testing with tests for PNG support, HTML parse error recovery, and DOM scripting.

To me, the ideal CSS test suite is one that has a bunch of basic property/value tests, like the ones I’ve been responsible for creating (1, 2), along with a bunch of acid tests for specific areas or concepts in that specification.  So an acidified CSS test suite would have individual acid tests for the box model, positioning, fonts, selectors, table layout, and so on.  It would not involve scripting or markup parsing (beyond what’s needed to handle selectors).  It would not use animated SVG icons.  Hell, it probably wouldn’t even use PNGs, except possibly alphaed PNGs when testing opacity and RGBA colors.  And maybe not even then.

So in a DOM test suite, you’d have one test page for each method or attribute, and then build some acid tests out of related bits (say, on an entire interface or set of closely related interfaces).  And maybe, at the end, you’d build an overarching acid test that rolled everything in the DOM spec into one fiendishly difficult test.  But it would be just about the DOM and whatever absolute minimum of other stuff you needed, like text rendering and maybe GIF support.  (Similarly, the CSS tests had to assume some basic HTML and CSS selector support, or else everything else fell down.)

And then, after all those test suites have been built up and a series of acid tests woven into them, with each one culminating in its own spec-spanning acid test, you might think about taking those end-point acid tests and slamming them all together into one super-ultra-hyper-mega acid test, something that even the xenomorphs from the Alien series would look at and say, “That’s gonna sting”.  That would be awesome.  But that’s not what we have.

I fully acknowledge that a whole lot of very clever thinking went into the construction of Acid3 (as was true of Acid2), and that a lot of very smart people have worked very hard to pass it.  Congratulations all around, really.  I just can’t help feeling like some broader and more important point has been missed.  To me, it’s kind of like meeting the general challenge of finding an economical way to loft broadband transceivers to an altitude of 25,000 feet (in order to get full coverage of large metropolitan areas while avoiding the jetstream) by daring a bunch of teams to plant a transceiver near the summit of Mount Everest—and then getting them to do it.  Progress toward the summit can be demonstrated and kudos bestowed afterward, but there’s a wider picture that seems to have been overlooked in the process.

Drugs, Bugs, and IE8

If there’s a downside to becoming a cyborg, it’s the aftermath.  I’m not talking about the dystopian corporate-state shenanigans: those are fully expected.  No, it’s the painkillers that really suck.  They basically do their job, but at the cost of mental acuity.  That is not a trade I’m happy to make.  Granted, there were some interesting physical hallucinations that came along for the ride, but that’s nowhere near enough to balance the scales.

Here’s what I mean on that last part.  At one point yesterday, lying in bed as I had been all day, I decided it was about time to straighten out my legs, which were crossed at the ankles and starting to feel a little funny.  When I sent the relevant signals to my legs, nothing really happened.  Slowly I came to realize that nothing was happening because my legs weren’t actually crossed at all.  Furthermore, it gradually dawned on me that if the sensoria I’d been getting had been correct, it would have to mean that my legs were not only crossed at the ankles, but also attached to my body backwards.

So anyway, I thought I’d write up some of my observations (thus far) regarding IE8 beta 1.  What?

I’m going to say basically the same thing I said about the first betas of IE7: test and report, but don’t fix.  That is to say, you should absolutely grab it and run it across all your own sites, and all your common destinations.  Find out what’s different, broken, or just plain strange.

But don’t start searching for workarounds.  Not yet.  Submit bug reports, yes.  Boil down the problems you hit to basic test cases and submit those, if you like.  (I do like, but I’ve got kind of a history with that sort of thing.)  Just don’t think that beta 1 represents what we’ll face in the final release.

No, I don’t have some sort of inside track; never have.  That conclusion simply seems obvious to me just by looking at how this beta acts.  For example, there’s no support at all for :first-line and :first-letter.  That’s not just a glitch.  That’s a lack of support for a CSS feature that’s been present for three major releases.  I just can’t see that omission persisting to final release.

Another problem I noticed is evident here on the home page of meyerweb.  In the sidebar, each list item has a left margin and negative text indentation, creating a classic “outdent”.  Like so:

#extra .panel li {margin-left: 1em; text-indent: -1em;}
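(For context, the markup that rule applies to is roughly this shape; the link text and URL here are invented for illustration.)

<div id="extra">
  <div class="panel">
    <ul>
      <li><a href="/archives/">Archives</a></li>
    </ul>
  </div>
</div>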

In each of those list items is a link of some kind, usually text.  The fun part is this: the hanging outdent part of that text isn’t clickable.  So the first couple of letters of each sidebar link are inactive.  They’re colored properly, but do nothing if you try to click them.  If you click on the active part of a link, the focus outline only draws around the active part.  And, for bonus yay, scrolling the page will wipe away any outdents that are offscreen.  So as you scroll down the page, you end up with all the sidebar links having their first few letters chopped off.  Whoops.

Again, that’s something I just can’t see going unaddressed in the final release.

In both these cases, flipping IE8 back to IE7 mode makes the weirdness go away.

I’ve seen more serious problems on the wider web.  Google Maps is currently busted beyond any hope of usefulness in IE8, as many have reported.  Also, I came across a site where loading the home page just locked up IE8 completely.  I had to force-quit and relaunch.  Every time I hit that page, lockup.

Flipping to IE7 mode allowed me to browse the site without any trouble at all.

These things, taken together, have really driven something home for me: there really is a new rendering engine in there.  I don’t just mean in the sense of fixing and adding enough things that the behavior is different.  I mean that I believe there’s truly a whole new engine under the hood of IE8.  And if the Acid 2 results and public statements of the IE team are to be believed, there’s a whole new standards-based rendering engine under that hood.

That’s kind of a big deal in any event.  The last time I remember a browser with an extended release history replacing its old, creaky, grown-over-time, crap-piled-on-crap engine with (what the browser team felt was) a new, improved one was the transition from Netscape 4.5 to Netscape 6.0.  And remember how well that went?  Yee haw.

I really shouldn’t be surprised about this.  Chris Wilson, for example, used the exact words “our new layout engine” during the WaSP roundtable (transcript).  I guess I’d been assuming that was verbal shorthand for “our much-improved version of our old layout engine”.  I guess I was wrong.

So I would personally argue that this release was mislabelled.  This is not a beta release.  As far as I’m concerned, it’s an alpha, even under the kinds of old-school naming conventions I prefer.  I’m not going to go around calling it that, because that would just be unnecessarily confusing, but it’s how I’m going to think of it.

Now I’m wondering just how long it will be until final release, given the kinds of distances one usually sees between alpha and final.

Unfortunately, I just took the 6pm set of painkillers, so I’ll be wondering at about one-third speed.

Principles and Legality

I woke up this morning (duh DAAAH dah DUH) and yesterday’s announcement was the first thing on my mind.  No doubt it’ll be a recurrent topic, at least for a little while.

One of the takeaways is what this change demonstrates about the IE team:  standards is and was their preferred default.  If it weren’t, they just would have found a way to square the IE7-default behavior with the Interoperability Principles announced late last month (slightly tricky but entirely possible).  That they initially chose otherwise speaks volumes about the pressures they face internally, and their willingness to publicly change direction speaks volumes about their commitment to supporting standards.  While I’m sure community feedback informed their decision, they pretty much knew what the reaction would be from the get-go.  If that was going to be the deciding factor, they would’ve chosen differently up front.

So what drove that change?  I keep coming back to two things, both of which were explicitly mentioned in yesterday’s announcement.

The first is, perhaps obviously, the previously mentioned Interoperability Principles.  Head on over there and read Principle II, “Support for Standards”.  If that isn’t a solid foundation on which to build an internal case for change, I don’t know what is.  I’m wryly amused by the idea that the IE team used the Interoperability Principles as a way to batter their way out of the grip of those internal pressures I mentioned.  The former aikido student in me finds that very satisfying.  True, the Principles came under fire for being just another set of empty words, but it would seem that they can be used for at least some concrete good.

As for the second, there’s a phrase repeated between the two announcements that I didn’t quote yesterday because I was still pondering its meaning.  I’m still not certain about it, but having had a chance to sleep on it, my initial reading hasn’t changed, so I’m going to quote and comment on it now.  First, from the press release:

“While we do not believe there are currently any legal requirements that would dictate which rendering mode must be chosen as the default for a given browser, this step clearly removes this question as a potential legal and regulatory issue,” said Brad Smith, Microsoft senior vice president and general counsel.

And then in Dean’s IEblog post:

While we do not believe any current legal requirements would dictate which rendering mode a browser must use, this step clearly removes this question as a potential legal and regulatory issue.

Okay, so they’re on message.  And the message seems to be this: that Opera’s move to link IE development to the larger EU anti-trust investigation bore fruit.  I was highly critical of that move, and unless I’m seriously misreading what I see here, I was wrong.  I’m still no fan of the tone that was used in announcing the move, but that’s window dressing.  Results matter most.

Speaking of Opera, there’s another side to all this that I find quite interesting.  So far, the reaction to Microsoft’s announcement has been overwhelmingly positive.  The sense I’ve picked up is, “Hooray! IE will act like browsers always have, and the problem is solved!”.

But is it?  The primary objection raised by Opera and several members of the community was that version targeting is an anti-competitive move, one which will force browser makers like Opera and authors of JavaScript libraries to support an ever-increasing and complex web (sorry) of rendering-engine behaviors in the market leader.  So far as I can tell, the change in default behavior does next to nothing to address that objection.  The various versions will still be there and still invoke-able by any page author who so chooses.  Yes, the default will be better for authors, but I don’t see how things get any better for Opera, Firefox, Safari, jQuery, Prototype, et al.

Perhaps I’ve missed something basic (“Again!” shouts the chorus).  If so, what?  If not, then why all the hosannas?

Meta-change

Now here’s something I didn’t expect to see when I woke up this morning:

“Microsoft Expands Support for Web Standards: Company outlines new approach to make standards-based rendering the default mode in Internet Explorer 8, will work with Web designers and content developers to help with standards behavior transition.”

Seriously, that’s the title and subhead of Microsoft’s latest press release.

About halfway through, there’s this from Ray Ozzie:

…we have decided to give top priority to support for these new Web standards. In keeping with the commitment we made in our Interoperability Principles of being even more transparent in how we support standards in our products, we will work with content publishers to ensure they fully understand the steps we are taking and will encourage them to use this beta period to update their sites to transition to the more current Web standards supported by IE8.

See also the IEblog entry Microsoft’s Interoperability Principles and IE8, where Dean Hachamovitch says:

Microsoft recently published a set of Interoperability Principles. Thinking about IE8’s behavior with these principles in mind, interpreting web content in the most standards compliant way possible is a better thing to do.

We think that acting in accordance with principles is important, and IE8’s default is a demonstration of the interoperability principles in action.

In other words, the IE team seems to have used recent Microsoft PR efforts to their, and our, advantage.

I’m relieved and glad on the one hand, and a little worried on the other.  It’s not like the issues I discussed, or Jeffrey wrote about, have gone away.  It’s just that the way in which they’re handled by IE has shifted—which in some ways is a huge difference.

I think what worries me most is the possibility that when the public beta hits, there will be enough incompatibility problems that pushback from other constituencies forces a change back to the original behavior.  I hope not.  I hope that what will happen is that any problems that come up will be addressed by spreading the news far and wide that there’s a simple one-line fix for those sites.
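That one-line fix being the version-targeting instruction; using the syntax from the original proposal (the exact token could still change before the final release), it looks like this:

<meta http-equiv="X-UA-Compatible" content="IE=7" />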

I’m glad that IE will act as browsers have always done, and default to the latest and greatest in the absence of any explicit direction to the contrary.  I’m doubly glad that the IE team is willing to do that, even knowing what they have to handle.  And I’m triply glad that the proposal was made in public ahead of time, with plenty of opportunity for debate, so that we could have a chance to weigh in and affect the browser’s behavior.

Common Bonds

A List Apart #253 brings the issue of version targeting back into the limelight with opposing-view pieces by Jeremy Keith and Jeffrey Zeldman.  (And I love the “Editor’s Choice” on this issue, J. David Eisenberg’s “‘Forgiving’ Browsers Considered Harmful”.)

I’m not going to comment on the views presented; both gentlemen do a fine job.  What I do wish to add, or perhaps to restate, is an observation about everyone interested in, and thinking or arguing about, this topic:

We all care about the same thing.

We all want to advance web standards.  We all want browsers to improve their support.  We all want better and more advanced specifications.  We all want to reduce inconsistencies.  We all want a better web.

The disagreement is over how best to get there given the situation we face now, as well as how we perceive that current situation.  A recurrent metaphor for me is that we’re a large group of pioneers trying to chart the best course through an unknown country, and there is disagreement on which route entails the least risk to the whole group.  Cross the desert or the mountains?  Traverse a swampy delta or a hilly forest?  Move through this valley or that one?

Sometimes what binds us is strong enough that the few differences seem sharper by comparison.  That shouldn’t keep us from remembering what we have in common, and the importance of that commonality.

Almost Target

I’d like to tell you a little story, if I may, from way, way back in 2002.  (The exact date is lost to the mists of time, but the year is pretty solid.)  Like a lot of stories, it’s little bit long; but unlike some stories, it’s true.

As the engineering staff at Netscape prepared a new release of Mozilla, the browser off which we branched Navigator, those of us in the Technology Evangelism/Developer Support (TEDS) team were testing it against high-ranked and partner sites.  On a few of those sites, we discovered that layouts were breaking apart.  In one case, it did so quite severely.

It didn’t take much to see that the problem was with sliced images in layout tables.  For some reason, on some sites they were getting pushed apart.  After a bit of digging, we realized the reason: the Gecko engine had updated its line-layout model to be more compliant with the CSS specification.  Now images always sat on the baseline (unless otherwise directed) and the descender space was always preserved.

This was pretty new in browserdom, because every other browser did what browsers had always done: shrink-wrapped table cells to an image if there was no other cell content.  The only problem was that behavior was wrong.  Fixing the flaws in the CSS implementation in Gecko had broken these sites’ layouts.  That is, it broke them in standards mode.  In quirks mode, Gecko rolled its behavior back to the old days and did the shrink-wrap thing.

We got in touch with the web team at one of the affected sites, a very prominent social networking site (of a sort) of the day, and explained the situation.  We already knew they couldn’t change their DOCTYPE to trigger quirks mode, because that would break other things they were doing.  We couldn’t offer them a simple CSS fix like td img {vertical-align: bottom;}, because their whole layout was in tables and that would throw off the placement of all their images, not just the sliced ones.  All we could offer was an explanation of the problem and to recommend they class all of their sliced images and use CSS to bottom them out, with assurances that this would cause no change in other browsers.
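In concrete terms, what we were offering looked something like this; the class name and image file are invented here for illustration:

td img.sliced {vertical-align: bottom;}

<td><img src="corner-slice.gif" alt="" class="sliced"></td>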

Their response was, in effect:  “No.  This is your problem.  Every other browser gets this right, and we’re not mucking around in our templates and adding classes all over just because you broke something.”

The truth, of course, was that we were actually fixing something, and every other browser got this wrong.  The truth was not relevant to our problem.  It seemed we had a choice: we could back out the improvement to our handling of the CSS specification; or we could break the site and all the other sites like it, which at the time were many.  Neither was really palatable.  And word was we could not ship without fixing this problem, whether by getting the site updated or the browser changed.  Those were the options.

Let me reiterate the situation we faced.  We:

  1. Had improved standards support in the browser, and then
  2. Found sites whose layouts broke as a result
  3. Whose developers point-blank refused to alter their sites
  4. And we had to fix the problem

We couldn’t back out the improvement; it affected all text displayed in the browser and touched too many other things.  We couldn’t make the site’s web team change anything, no matter how many times we told them this was part of the advance of web standards and better browser behavior.  Two roads diverged in a yellow web, and we could choose neither.

So we found a third way: “almost standards” mode, a companion to the usual modes of quirks and standards.  Yes, this is the reason why “almost standards” mode exists.  If I remember the internal argument properly, its existence is largely my fault; so to everyone who’s had to implement an “almost standards” mode in a non-Gecko browser in order to mirror what we did, I’m sorry.

We made “almost standards” mode apply to the DOCTYPE found on the offending site—an XHTML DOCTYPE, I should point out.  While we were at it, we rolled in IBM’s custom DTD.  They were using it to make their site validate while doing all kinds of HTML-invalid stuff, and they were experiencing the same layout problem.  And lo: a third layout mode was born.  All because some sites were badly done and would not update to accommodate our improvements.  We did it so as not to break a small (but popular) portion of the web while we advanced our standards support.
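For the record, one of the DOCTYPEs that puts Gecko into almost standards mode is plain old XHTML 1.0 Transitional:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">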

(By the way, it was this very same incident that gave birth to the article “Images, Tables, and Mysterious Gaps”.)

Now take that situation and multiply it by a few orders of magnitude, and you get an idea of what the IE team faces.  It’s right where we were at Netscape: caught between our past mistakes and a site’s refusal to accommodate our desire to improve support for open standards.

Some have said that Microsoft is in a unique position to take leadership and spread the news of improved standards and updating old sites to its customers.  That’s true.  But what happens when a multi-billion dollar partner corporation refuses to update and demands, under the terms of its very large service contract and its very steep penalty clauses, that a new version of IE not break (for whatever value of “break” you like) its corporate intranet, or its public e-commerce site?  It only takes one to create a pretty large roadblock.

For all we did in publishing great content to DevEdge, proactively helping sites to update their markup and CSS and JS to work with Gecko (while not breaking in other browsers), and helping guide the improvement of standards support in Gecko, we could not overcome this obstacle.  We had to work around it.

Looking back on it now, it’s likely this experience subconsciously predisposed me to eventually accept the version targeting proposal, because in a fairly substantial way, it’s what we did to Mozilla under similar conditions.  We just did it in a much more obscure and ultimately fragile manner, tying it to certain DOCTYPEs instead of some more reliable anchor.  If we could have given that site (all those sites) an easy way to say “render like Mozilla 0.9” (or whatever) at the top of every page, or in the server headers, they might have taken it.

But had we offered and they refused, putting us back to the choice of backing out the improvements or changing the browser, would we have set things up to default to the specific, known version of Mozilla instead of the latest and greatest?  The idealist in me likes to think not.  The pragmatist in me nods yes.  What else could we have done in that circumstance?  Shipped a browser that broke a top-ten site on the theory that once it was in the wild, they’d acquiesce?  Even knowing that this would noticeably and, in a few cases, seriously degrade the browsing experience for our users?  No.  We’d have shipped without the CSS improvement, or we’d have put in the targeting with the wrong default.  We didn’t have version targeting, but we still made the same choice, only we hinged it on the DOCTYPE.

A short-term fix for a short-term problem: yes.  Yet had we not done it, how long would Netscape/Mozilla’s standards support have suffered, waiting for the day that we could add that improvement back in without breaking too many sites that too many people would notice?  Years, possibly.  So we put in a badly implemented type of version targeting, which allowed us to improve our standards support more quickly than we otherwise would have, and it has been with us for the more than half a decade since.

So maybe I’m more sympathetic to the IE predicament and their proposed solution because I’ve been there and done it already.  Not to nearly the same degree, but the dilemma seemed no less daunting for all the difference in scale.  It’s something worth keeping in mind while evaluating what I’ve said on this topic, and whatever I will say in the future.

Version Two

So yesterday was interesting.  In a whole lot of ways.

As I expected, there were some widely varied reactions (there’s a good list over at Digital Web, if you’d like to taste the rainbow) and many of them were in opposition to the whole idea.  The opposition was fine, but the tone taken by many was not.  Even though I expected some flaming, I admit I really didn’t expect the overall tone to be so vitriolic, and I found it to be profoundly depressing.  I’m not talking to everyone here, but it still needs to be said: if you feel the need to impugn the integrity or intelligence of another person to oppose an idea, you’re undercutting yourself, not your target nor the thing you oppose.  It’s the dialectical equivalent of “considered harmful”.

A number of people said things to the effect of what Roger said: “explain why we’re wrong to oppose this”.  That I cannot do.  I was hoping for opposition, because that’s the only way to really test an idea.  I’m not so arrogant as to think that I alone can account for every variable and forecast the eventual outcome.  I can only explain, as I tried to do, why I think it’s a good idea, and listen to the reasoning of those who think it’s a bad idea.

No explanation is ever complete, of course, so here’s some followup.

I suspect that a good deal of the emotional objection springs from a perception that the proposal is to require all browsers to implement targeting.  Not at all.  Please be clear on this: nobody is saying this should be required in any browser.  It can be interpreted as a requirement for authors, which is a separate issue, and I think one that’s quickly becoming negated as people work through the details.  Not necessarily negated in a way Microsoft would like, but that’s really their problem.  (See, for example, Sam Ruby’s solution.)

One very likely outcome here is that IE does this and all the other browsers don’t.  In a lot of ways, I’d be happiest with that outcome, because it would give us the opportunity to evaluate both approaches in parallel.  Personally, I think other browsers should adopt the same mechanism only if they feel they need it.  So far, they’ve indicated that they don’t.  Fair enough.  IE gets to try its hand at maintaining multiple internal versions, and everyone else can continue as usual.

In that vein, I have to admit that I don’t understand the assertions that this will make life harder on other browsers, that they’ll have to support all the various rendering modes in IE.whatever.  Can someone explain that to me, please?  I’m not saying the assertion is wrong.  I’m saying I don’t understand why that would be the end result.

At a wider level, I think a lot of people are discounting the fact that version targeting is absolutely nothing new in the standards world, let alone the web development world.  Conditional comments, CSS hacks, and the DOCTYPE switch itself are all examples of version targeting.  When I write *+html… I’m doing it because I know IE7, and IE7 alone (at least for now), will see the declarations in that rule.  I did exactly that just the other day, in fact.  There’s a whole sub-set of the current CSS corpus based on figuring out parser bugs to exploit into hacks that are used to feed rules to specific browsers or specific versions of browsers.  We’ve been doing this for years.  I mean, okay, if you’ve recently done client work where you didn’t need any form of detection at all—as in, you quite intentionally used none of the things I just listed—then you can exempt yourself from the “we” in that statement; furthermore, my hat is off to you.  Seriously.  Because I can’t remember the last time I was able to avoid using at least one or two CSS hacks in a project in order to deal with browser inconsistencies.
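To make that concrete, here are the two most common flavors of targeting already in everyday use; the selector, property, and file name are just examples:

/* selector hack: only IE7 (for now) applies this rule */
*+html #sidebar {margin-left: 0;}

<!--[if lte IE 6]>
<link rel="stylesheet" href="/css/ie6-fixes.css" media="screen">
<![endif]-->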

I’m not going to claim that these mechanisms are universally broken.  In fact, I believe exactly the opposite: they’ve long since proven their utility in an imperfect world.  (The perfect world being the one where all browsers implement all standards correctly and no form of browser detection is ever needed.)  That alone made me reconsider the targeting proposal in a whole new light.

The handling of JavaScript libraries in a world where the pages calling the libraries will determine how the JS is interpreted—that’s definitely something I hadn’t considered.  As I understand it, the problem case is one where a JS library that uses (say) IE9 features is loaded into a page that triggers the IE7 engine.  The library would need to preserve backward compatibility with all the IE versions that could be used.

But isn’t that already the case?  Every library whose source I’ve studied has all kinds of detection, whether it’s feature or browser detection, in order to work with multiple browsers.  I would think that under version targeting, the same thing would be necessary: you do feature detection and work accordingly.  Again, it’s entirely possible I missed something there, so feel free to let me know what it was.
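To illustrate what I mean by feature detection, here’s the classic pattern boiled way down (the function name is arbitrary):

function addEvent(el, type, fn) {
  if (el.addEventListener) {
    // standard DOM events API
    el.addEventListener(type, fn, false);
  } else if (el.attachEvent) {
    // older IE-only API
    el.attachEvent("on" + type, fn);
  }
}

The library asks what the browser can do, not what version it claims to be, and under version targeting that approach keeps right on working.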

As for the proposed default behavior, where no X-UA-Compatible information gets you the IE7 rendering, I can’t defend that, nor do I have any wish to defend it.  I tried for most of an hour to convince a member of the IE team that the default behavior should be “latest”, not “IE7”, and was unable to make a sufficiently persuasive case—by which I mean a case that overcame the (perceived) needs of the IE team.  My hope is that someone will succeed where I failed.

Because yes, I too want to be able to leave off the meta, leave my server’s HTTP headers alone, and still get the latest and greatest in standards support from IE.whatever.  That’s how browsers have always acted, and it’s what I’m used to handling.  If someone can convince the IE team that doing this would be in their (and their company’s) best interests, then we’ll all be in your debt.  With the growing collection of workarounds, I think it might be possible to convince them that the default behavior we want is going to quickly become the de facto standard, and that they should go with the flow and make it the default internally.  I don’t know if that will work, but it’s worth a try.
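For what it’s worth, the proposal does include an explicit opt-in to “latest and greatest”, so any of us can get that behavior today even if the default stays put.  As ever, the exact syntax may shift before release, but as proposed it looks like this:

<meta http-equiv="X-UA-Compatible" content="IE=edge" />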

It has to be realized that this may well be the only way for IE to advance its standards support in a reasonable time frame, or at all.  Version targets let them avoid breaking existing sites, especially intranet sites, while fixing and adding their DOM, CSS, and other implementations.  That has to be understood and accepted if the discussion is to be anything more than people talking past each other.  Within the world of IE, they must have a way to uphold backwards compatibility with sites developed under older versions of IE.  Without it, they will largely stop fixing bugs they discover in their standards support.  It really does come down to that.  The fact that their current situation is their own fault is not really relevant to the topic of moving forward.  This is a way forward for IE, just as the DOCTYPE switch was a way forward for a number of browsers (including IE) back at the turn of the millennium.  It may be the best way.  If there’s a better way for them to meet that need, then I absolutely want to hear it.  But remember, “let old sites break” is a non-starter.  You might as well say “let old sites not load at all in any browser”.

At any rate, I’m opening comments on this post, and I do hope there will be reasoned discussion of the pros and cons of version targeting.  As always, I’m going to enforce civility in the discussion.  Disagreement, opposition, objection: all fine, and in fact encouraged.  Flaming: not fine, and will be deleted.
