Archive: 'Browsers' Category

Wanted: Layout System

(This is part of the Feedback on ‘WaSP Community CSS3 Feedback 2008’ series.)

Not surprisingly, there was a lot of community feedback asking for better layout mechanisms.  Actually, people were asking for any decent layout mechanism at all, which CSS has historically lacked.  Floats mostly work, but they’re a hack and can be annoyingly fragile even when you ignore old-browser bugs.  Positioning works in limited cases, but does not handle web-oriented layout at all well.

Why do we use floats for layout, anyway?  clear.  That’s pretty much the whole answer.  The unique in-flow/out-of-flow nature of floats means they interact with each other and with the normal flow, which means they can be cleared, which makes them useful.  Because with clear, we can float layout blocks around and then push other non-floated blocks, like footers, below the floats.

Positioning, of course, permits total layout freedom in the sense that you can put a layout block anywhere with respect to its containing block.  The downfall is that absolutely positioned elements are entirely out of the normal flow, so they can’t stay out of each other’s way like floats do, and you can’t clear anything with respect to a positioned element.  If there had been a position-clear or its equivalent from the outset, we’d never have bothered with floats.

(And if we can just add position-clear to CSS, that would be completely awesome.  It’s been done with JavaScript and it will most likely be done again and better.  It wouldn’t even be that hard to implement, at least for 99.5% of cases.)
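For the truly impatient, here’s a rough sketch of how a script might fake it.  The “pos-clear” class name is made up, the script only handles simple cases, and real code would need to re-run whenever the layout changes, but it shows how little is actually involved.

// Rough sketch of a scripted "position-clear".  Any absolutely positioned
// element carrying the (hypothetical) class "pos-clear" gets pushed below
// the lowest bottom edge of the element siblings that precede it.  The
// element is assumed to already be position: absolute in the CSS.
function positionClear() {
	var all = document.getElementsByTagName('*');
	for (var i = 0; i < all.length; i++) {
		var el = all[i];
		if (!/\bpos-clear\b/.test(el.className)) continue;
		var bottom = 0;
		for (var sib = el.previousSibling; sib; sib = sib.previousSibling) {
			if (sib.nodeType != 1) continue; // elements only
			var edge = sib.offsetTop + sib.offsetHeight;
			if (edge > bottom) bottom = edge;
		}
		el.style.top = bottom + 'px';
	}
}
window.onload = positionClear;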

All this is why the old “only use tables for layout” argument keeps coming up over and over: strip away the overheated rhetoric and obvious link-baiting, and you find the core of a real need.  Because as powerful as CSS can be, table cells do certain things very easily that CSS makes very, very hard.  Cells stretch vertically, keeping equal heights as a matter of their intrinsic nature.  They stay out of each other’s way, while still being allowed to sit next to each other and use any sizing dimensions.  They tie their layout to their parent elements, and vice versa.

There are no equivalents in CSS.  There have been various very clever attempts to replicate bits and pieces of those capabilities using CSS.  What CSS does, it does very well: if you don’t need equal-height layout blocks, then no problem.  If you do, it’s a massive pain.  Clever techniques provide substitutes, but can’t replace what tables already do.

And please, let’s put the whole “display: table-cell will grant those abilities through CSS” to rest.  Saying that is just saying “use tables for layout” with different words.  Turning a bunch of divs or list items or whatever into table-role boxes is no better than just using table markup in the first place, and it’s arguably worse.  Using element names other than table and td to create layout tables, and then claiming it’s not using tables for layout, borders on self-deception.

Not to mention doing things that way means you’re doing your layout in a highly source-order-dependent fashion, which was one of the things about table layout we were trying to get away from in the first place.

So how do we get really powerful source-order-independent layout?  I wish I knew.  The Advanced Layout module has been sitting around for a while now, and even if you’re a fan of defining layout as ASCII art—which I find equal parts repellent and appealing, but that’s probably just me—there appears to be close to zero implementor interest.  So how do we get those abilities in a form that implementors will, y’know, implement?  I don’t know, and I don’t much care how.  We just need it, and have needed it for a good decade or so.  Without it, CSS is a styling language but not a layout language.  We’ve bent it into being something close to a layout language, which is nice but not really ideal.

Maybe CSS isn’t the place for this.  Maybe there needs to be a new layout language that can be defined and implemented without regard to the constraints of the existing CSS syntax rules, without worrying about backwards compatibility.  Maybe that way we can not only get strong layout but also arbitrary shapes, thus leaving behind the rectangular prison that’s defined the web for almost two decades.

I don’t have a concrete idea to propose here, because it’s not up to us any more.  A solution was worked out over the course of several years and then found wanting by the implementors.  Really, it’s up to the implementors to figure it out now.  I personally would like to just lock the browser teams from Microsoft, Mozilla, Opera, and Apple in a room and not let them out until they’ve defined something that works and they’ve all agreed to implement soonest.  I might even supply food and water.

And yes, I just advocated doing this outside the W3C process.  Why wouldn’t I?  The process has, in the last decade, not produced anything even remotely resembling an answer to this problem.  Time to try another path and see if it gets any closer to the goal.

No doubt someone’s going to spin this as “See, even noted standards zealot Eric Meyer now says CSS is flawed!”—only they’ll be wrong because this isn’t a now thing.  I’ve been saying this for years in interviews, in person, and in general.  Any time someone asks me what CSS is missing or should do better, the answer has always been a variant on “a strong layout system”.  I’ve been saying it for at least a decade.  So I’m not saying it now.  I’m saying it again.  And again and again and again and…

If I sound frustrated, it’s because I am, and have been for a good long while.  I’m not the only one.  It rankles to have CSS be, as Winston Churchill would have put it, the worst form of layout except for all the others that have been tried.

An Event Apart and HTML 5

The new Gregorian year has brought a striking new Big Z design to An Event Apart, along with the detailed schedule for our first show and the opening of registration for all four shows of the year.  Jeffrey has written a bit about the thinking that went into the design already, and I expect more to come.  If you want all the juicy details, he’ll be talking about it at AEA, as a glance at the top of the Seattle schedule will tell you.  And right after that?  An hour of me talking about coding the design he created.

One of the things I’ll be talking about is the choice of markup language for the site, which ended up being HTML 5.  In the beginning, I chose HTML 5 because I wanted to do something like this:

<li>
<a href="/2009/seattle/">
<h2><img src="/i/09/city-seattle.jpg" alt="Seattle" /></h2>
<h3>May 4–5, 2009</h3>
<p>Bell Harbor International Conference Center</p>
</a>
</li>

Yes, that’s legal in HTML 5, thanks to the work done by Bruce Lawson in response to my href-anywhere agitation.  It isn’t what I’d consider ideal, structurally, but it’s close.  It sure beats having to make the content of every element its own hyperlink, each one pointing at the exact same destination:

<li>
<h2><a href="/2009/seattle/"><img src="/i/09/city-seattle.jpg" alt="Seattle" /></a></h2>
<h3><a href="/2009/seattle/">May 4–5, 2009</a></h3>
<p><a href="/2009/seattle/">Bell Harbor International Conference Center</a></p>
</li>

I mean, that’s just dumb.  Ideally, I could drop an href on the li instead of having to wrap an a element around the content, but baby steps.  Baby steps.

As Bruce discovered, pretty much all browsers already let you wrap a elements around other stuff, so it got added to HTML 5.  And when I tried it, it worked, clickably speaking.  That is, all the elements I wrapped became part of one big hyperlink, which is what I wanted.

What I didn’t want, though, was the randomized layout weirdness that resulted once I started styling the descendants of the link.  Sometimes everything would lay out properly, and other times the bits and pieces were all over the place.  I could (randomly) flip back and forth between the two just by repeatedly hitting reload.  I thought maybe it was the heading elements that were causing problems, so I converted them all to classed paragraphs.  Nope, same problems.  So I converted them all to classed spans and that solved the problem.  The layout became steady and stable.

I was happy to get the layout problems sorted out, obviously.  Only, at that point, I wasn’t doing anything that required HTML 5.  Wrapping classed spans in links in the place of other, more semantic elements?  Yeah, that’s original.  It’s just as original as the coding pattern of “slowly leaching away the document’s semantics in order to make it, at long last and after much swearing, consistently render as intended”.  I’m sure one or two of you know what that’s like.

As a result, I could have gone back to XHTML 1.1 or even HTML 4.01 without incident.  In fact, I almost did, but in the end I decided to stick with HTML 5.  There were two main reasons.

  1. First, AEA is all about the current state and near future of web design and development.  HTML 5 is already here and in use, and its use will grow over time.  We try to have the site embody the conference itself as much as possible, so using HTML 5 made some sense.

  2. I wanted to try HTML 5 out for myself under field conditions, to get a sense of how similar or dissimilar it is to what’s gone before.  Turns out the answers are “very similar” and “frustratingly dissimilar”, assuming you’re familiar with XHTML.  The major rules are pretty much all the same: mind your trailing slashes on empty elements, that kind of thing.  But you know what the funniest thing about HTML 5 is?  It’s the little differences.  Like not permitting a value attribute on an image submit.  That one came as a rather large surprise, and as a result our subscribe page is XHTML 1.0 Transitional instead of HTML 5.  (Figuring out how to work around this in HTML 5 is on my post-launch list of things to do; there’s a sketch of the likely approach just after this list.)

    Oh, and we’re back to being case-insensitive.  <P Class="note"> is just as valid as <p class="note">.  Having already fought the Casing Wars once, this got a fractional shrug from me, but some people will probably be all excited that they can uppercase their element names again.  I know I would’ve been, oh, six or seven years ago.

    Incidentally, I used validator.nu to check my work.  It seemed the most up to date, but there’s no guarantee it’s perfectly accurate.  Ged knows every other validator I’ve ever used has eventually been shown to be inaccurate in one or more ways.
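About that image-submit problem: a likely workaround is the boring one, which is to carry the data in a hidden field instead of a value attribute.  A minimal sketch, with hypothetical ids and names rather than the actual subscribe page’s markup:

// Emulate <input type="image" value="..."> without the value attribute
// by stashing the data in a hidden field when the button is clicked.
// All the ids and names here are hypothetical.
var form = document.getElementById('subscribe-form');
var button = document.getElementById('subscribe-button'); // the image input
button.onclick = function() {
	var hidden = document.createElement('input');
	hidden.type = 'hidden';
	hidden.name = 'action';      // the name the value attribute would have had
	hidden.value = 'subscribe';  // ...and its value
	form.appendChild(hidden);
	// the image input then submits the form as usual
};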

I get the distinct impression that use of HTML 5 is going to cause equal parts of comfort (for the familiar parts) and eye-watering rage (for the apparently idiotic differences).  Thus it would seem the HTML 5 Working Group is succeeding quite nicely at capturing the current state of browser behavior.  Yay, I guess?

And then there was the part where I got really grumpy about not being able to nest a hyperlink element inside another hyperlink element… but that, like so many things referenced in this post, is a story for another day.

JavaScript Will Save Us All

A while back, I woke up one morning thinking, John Resig’s got some great CSS3 support in jQuery but it’s all forced into JS statements.  I should ask him if he could set things up like Dean Edwards’ IE7 script, so that the JS scans the author’s CSS, finds the advanced selectors, does any necessary backend juggling, and makes CSS3 selector support Transparently Just Work.  And then he could put that back into jQuery.

And then, after breakfast, I fired up my feed reader and saw Simon Willison’s link to John Resig’s nascent Sizzle project.

I swear to Ged this is how it happened.

Personally, I can’t wait for Sizzle to be finished, because I’m absolutely going to use it and recommend its use far and wide.  As far as I’m concerned, though, it’s a first step into a larger world.

Think about it: most of the browser development work these days seems to be going into JavaScript performance.  Those engines are being overhauled and souped up and tuned and re-tuned to the point that performance is improving by orders of magnitude.  Scanning the DOM tree and doing things to it, which used to be slow and difficult, is becoming lightning-fast and easy.

So why not write JS to implement multiple background-image support in all browsers?  All that’s needed is to scan the CSS, find instances of multiple-image backgrounds, and then dynamically add divs, one per extra background image, to get the intended effect.
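To make that concrete, here’s a hedged sketch of the div-nesting half of the trick.  It assumes the CSS scanning has already happened (the raw stylesheet text generally has to be fetched and parsed, since browsers tend to drop declarations they don’t understand before scripts can see them), and that we’re handed an element plus its image list in CSS order, topmost first.  The function name is mine, not any library’s:

// Emulate multiple background images by nesting one div per extra image.
// "urls" is in CSS order: first image paints on top, last on the bottom,
// so the outermost element gets the last image and each nested div gets
// the next one up.
function layerBackgrounds(el, urls) {
	var target = el;
	for (var i = urls.length - 1; i >= 0; i--) {
		if (i < urls.length - 1) {
			var wrap = document.createElement('div');
			while (target.firstChild) wrap.appendChild(target.firstChild);
			target.appendChild(wrap);
			target = wrap;
		}
		target.style.backgroundImage = 'url(' + urls[i] + ')';
	}
}
// usage, with made-up names:
// layerBackgrounds(document.getElementById('hero'),
//                  ['top.png', 'middle.png', 'bottom.png']);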

Just like that, you’ve used the browser’s JS to extend its CSS support.  This approach advances standards support in browsers from the ground up, instead of waiting for the browser teams to do it for us.

I suspect that not quite everything in CSS3 will be amenable to this approach, but you might be surprised.  Seems to me that you could do background sizing with some div-and-positioning tricks, and text-shadow could be supportable using a sIFR-like technique, though line breaks would be a bear to handle.  RGBa and HSLa colors could be simulated with creative element reworking and opacity, and HSL itself could be (mostly?) supported in IE with HSL-to-RGB calculations.  And so on.
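The HSL-to-RGB part, at least, is purely mechanical; the CSS3 color draft even spells out the algorithm.  A sketch of the conversion a script could lean on:

// HSL to RGB, following the algorithm in the CSS3 color draft.
// h is in degrees; s and l are fractions from 0 to 1.
function hslToRgb(h, s, l) {
	h = (((h % 360) + 360) % 360) / 360;
	var m2 = (l <= 0.5) ? l * (s + 1) : l + s - l * s;
	var m1 = l * 2 - m2;
	function hue(h) {
		if (h < 0) h += 1;
		if (h > 1) h -= 1;
		if (h * 6 < 1) return m1 + (m2 - m1) * h * 6;
		if (h * 2 < 1) return m2;
		if (h * 3 < 2) return m1 + (m2 - m1) * (2/3 - h) * 6;
		return m1;
	}
	return [Math.round(hue(h + 1/3) * 255),
	        Math.round(hue(h) * 255),
	        Math.round(hue(h - 1/3) * 255)];
}
// hslToRgb(120, 1, 0.25) -> [0, 128, 0], which is green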

There are two primary benefits here.  The first is obvious: we can stop waiting around for browser makers to give us what we want, thanks to their efforts on JS engines, and start using the advanced CSS we’ve been hearing about for years.  The second is that the process of finding out which parts of the spec work in the real world, and which fall down, will be greatly accelerated.  If it turns out nobody uses (say) background-clip, even given its availability via a CSS/JS library, then that’s worth knowing.

What I wonder is whether the W3C could be convinced that two JavaScript libraries supporting a given CSS module would constitute “interoperable implementations”, and thus allow the specification to move forward on the process track.  Or heck, what about considering a single library getting consistent support in two or more browsers as interoperable?  There’s a chance here to jump-start the entire process, front to back.

It is true that browsers without JavaScript will not get the advanced CSS effects, but older browsers don’t get our current CSS, and we use it anyway.  (Still older browsers don’t understand any CSS at all.)  It’s the same problem we’ve always faced, and everyone will face it differently.

We don’t have to restrict this to CSS, either.  As I showed with my href-anywhere demo, it’s possible to extend markup using JS.  (No, not without breaking validation: you’d need a custom DTD for that.  Hmmm.)  So it would be possible to use JS to, say, add audio and video support to currently-available browsers, and even older browsers.  All you’d have to do is convert the HTML 5 elements into HTML 4 equivalents, dynamically writing out the needed attributes and so forth.  It might not be a perfect 1:1 translation, but it would likely be serviceable—and would tear down some of the highest barriers to adoption.
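As a sketch of the flavor of thing I mean (and only a sketch: it assumes a src attribute, hard-codes a type, and ignores the extra help older IE needs just to parse unknown elements):

// Downconvert HTML5 <video> elements to HTML4 <object> elements.
function downconvertVideo() {
	var vids = document.getElementsByTagName('video');
	// the collection is live and shrinks as we swap, so walk it backwards
	for (var i = vids.length - 1; i >= 0; i--) {
		var v = vids[i];
		var obj = document.createElement('object');
		obj.setAttribute('data', v.getAttribute('src'));
		obj.setAttribute('type', 'video/mp4'); // assumed; sniff it in real code
		obj.setAttribute('width', v.getAttribute('width') || '320');
		obj.setAttribute('height', v.getAttribute('height') || '240');
		v.parentNode.replaceChild(obj, v);
	}
}
window.onload = downconvertVideo;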

There’s more to consider, as well: the ability to create our very own “standards”.  Maybe you’ve always wanted a text-shake property, which jiggles the letters up and down randomly to look like the element got shaken up a bit.  Call it -myCSS-text-shake or something else with a proper “vendor” prefix—we’re all vendors now, baby!—and go to town.  Who knows?  If a property or markup element or attribute suddenly takes off like wildfire, it might well make it into a specification.  After all, the HTML 5 Working Group is now explicitly set up to prefer things which are implemented over things that are not.  Perhaps the CSS Working Group would move in a similar direction, given a world where we were all experimenting with our own ideas and seeing the best ideas gain widespread adoption.
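And here’s roughly what a homegrown text-shake might look like under the hood, assuming the element holds a single text node and a shake range of a couple of pixels either way:

// A toy "text-shake": wrap each character in a span nudged up or down
// by a random couple of pixels.  Assumes el contains one text node.
function textShake(el) {
	var text = el.firstChild.nodeValue;
	el.removeChild(el.firstChild);
	for (var i = 0; i < text.length; i++) {
		var span = document.createElement('span');
		span.appendChild(document.createTextNode(text.charAt(i)));
		span.style.position = 'relative';
		span.style.top = (Math.round(Math.random() * 4) - 2) + 'px';
		el.appendChild(span);
	}
}
// textShake(document.getElementById('shaken'));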

In the end, as I said in Chicago last week, the triumph of standards (specifically, the DOM standard) will permit us to push standards support forward now, and save some standards that are currently dying on the vine.  All we have to do now is start pushing.  Sizzle is a start.  Who will take the next step, and the step after that?

Need Help With Table Row Events

Here’s a late-week call for assistance in the JavaScript realm, specifically in making IE do what I need and can make happen in other browsers.  I’d call this a LazyWeb request except I’ve been trying to figure out how to do it all [censored] afternoon, and it doesn’t [censored] work no matter how many [censored] semi-related examples I find online that work just [censored] fine, but still don’t [censored] help me [censored] fix this [censored] problem.  [doubly censored]!

I have a table.  (Yes, for data.)  In the table are rows, of course, and each row has a number of cells.  I want to walk through the rows and dynamically add an ‘onclick’ event to every row.  The actual event is slightly different for each row, but in every case, it’s supposed to call a function and pass some parameters (which are the things that change).  Here’s how I’m doing it:

var event = '5'; // in the real code this is passed into the surrounding function
var mapStates = getElementsByClassName('map','tr'); // custom helper, not the native DOM method
for (var x = 0; x < mapStates.length; x++) {
	var el = mapStates[x];
	var id = el.getAttribute('id');
	// build the handler as a string, then set it as an attribute
	var val = "goto('" + id + "','" + event + "');";
	el.setAttribute('onclick',val);
}

Okay, so that works fine in Gecko.  It doesn't work at all in IE.  I changed el.setAttribute('onclick',val); to el.onclick = val; per some advice I found online and that completely failed in everything.  Firebug told me "el.onclick is not a function".  Explorer just silently did nothing, like always.

So how am I supposed to make this work in IE, let alone in IE and Gecko-based and WebKit-based and all other modern browsers?

Oh, and do not tell me that framework X or library Q does this so easily, because I'm trying to learn here, not have someone else's code hand-wave the problem away.  Pointing me directly to the actual code block inside a framework or library that makes this sort of thing possible:  that's totally fine.  I may not understand it, but at least there will be JS for me to study and ask questions about.  Ditto for pointing me to online examples of doing the exact same thing, which I tried to find in Google but could not: much appreciated.

Help, please?

Update: many, many commenters helped me see what I was missing and therefore doing wrong—thank you all!  For those wondering what I was wondering, check out the comments.  There are a lot of good examples and quick explanations there.
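The short version, for anyone who lands here without reading the comments: IE wants an actual function assigned to onclick, not a string set as an attribute.  A sketch of that kind of fix, with a closure so each row keeps its own values:

// Assign a real function instead of a string.  The immediately-invoked
// wrapper captures this row's id (and the event value, the same variable
// as in the snippet above) so rows don't share the loop's final values.
var mapStates = getElementsByClassName('map','tr');
for (var x = 0; x < mapStates.length; x++) {
	var el = mapStates[x];
	el.onclick = (function(id, ev) {
		return function() { goto(id, ev); };
	})(el.getAttribute('id'), event);
}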

Characteristic Confusion

In the course of building my line-height: normal test page, I settled on defaulting to an unusual but still pervasive font family: Webdings.  The idea was that if you picked a font family in the dropdown and you didn’t have it installed, you’d fall back to Webdings and it would be really obvious that it had happened.

(A screenshot of the symbols expected from Webdings: an ear, a circle with a line through the middle, and a spider.)

Except in Firefox 3b5, there were no dings, web or otherwise.  Instead, some serif-family font (probably my default serif, Times) was being used to display the text “Oy!”.

It’s a beta, I thought with a mental shrug, and moved on.  When I made mention of it in my post on the subject, I did so mainly so I didn’t get sixteen people commenting “No Webdings in Firefox 3 betas!” when I already knew that.

So I didn’t get any of those comments.  Instead, Smokey Ardisson posted that what Firefox 3 was doing with my text was correct.  Even though the declared fallback font was Webdings, I shouldn’t expect to see it being used, because Firefox was doing the proper Unicode thing and finding me a font that had the character glyphs I’d requested.

Wow.  Ignoring a font-family declaration is kosher?  Really?

Well, yes.  It’s been happening ever since the CSS font rules were first implemented.  In fact, it’s the basis of the whole list-of-alternatives syntax for font-family.  You might’ve thought that CSS tells browsers to check whether a requested family is available, fall back to the next one on the list if not, and then render the text.  And it does, but it says they should do all of that on a per-character basis.

That is, if you ask for a character and the primary font face doesn’t have it, the browser goes to the next family in your list looking for a substitute.  It keeps doing that until it finds the character you wanted, either in your list of preferred families or else in the browser’s default fonts.  And if the browser just can’t find the needed symbol anywhere at all, you get an empty box or a question mark or some other symbol that means “FAIL” in font-rendering terms.

A commonly-cited case for this is specifying a CJKV character in a page and then trying to display it on a computer that has no fonts for non-Latin scripts installed.  The same would hold true for any page with any characters that the installed fonts can’t support.  But think about it: if you browse to a page written in, say, Arabic, and your user style sheet says that all elements’ text should be rendered in New Century Schoolbook, what will happen?  If you have fonts that support Arabic text, you’re going to see Arabic, not New Century Schoolbook.  If you don’t, then you’re going to see a whole lot of “I can’t render that” symbols.  (Though I don’t know what font those symbols will be in.  Maybe New Century Schoolbook?  Man, I miss that font.)
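If you want to catch this per-character fallback in the act, you can measure it.  Here’s a sketch of the technique (not any browser API): render a character in the requested family with a known fallback appended, then in the fallback alone; identical widths strongly suggest the requested face never supplied the glyph.

// Crude per-character fallback detector: if "family" supplies the glyph,
// its width should differ from the fallback-only width.  Not foolproof
// (two faces can coincide), but good enough to watch fallback happen.
function usesFallback(ch, family) {
	function width(fontList) {
		var s = document.createElement('span');
		s.style.fontFamily = fontList;
		s.style.fontSize = '100px';
		s.appendChild(document.createTextNode(ch));
		document.body.appendChild(s);
		var w = s.offsetWidth;
		document.body.removeChild(s);
		return w;
	}
	return width(family + ', monospace') == width('monospace');
}
// usesFallback('O', 'Webdings') -> true in a browser that, like Firefox 3,
// skips Webdings when looking for letterforms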

So: when I built my test, I typed “Oy!” for the example text, and then wrote styles to use Webdings to display that text.  Here’s how I represented that, mentally: the same as if I’d opened up a text editor like, oh, MS Word 5.1a; typed “Oy!”; selected that text; and then dropped down the “Font” menu and picked “Webdings”.

But here’s how Firefox 3 dealt with it: I asked for the character glyphs “O”, “y”, and “!”; I asked for a specific font family to display that text; the requested font family doesn’t contain those glyphs or anything like them; the CSS font substitution rules kicked in and the browser picked those glyphs out of the best alternative.  (In this case, the browser’s default fonts.)

In other words, Firefox 3 will not show me the ear-Death Star-spider combo unless I put those exact symbols into my document source, or at least Unicode references that call for those symbols.  Because that’s what happens in a Unicode world: you get the glyphs you requested, even if you thought you were requesting something else.

The problem, of course, is that we don’t live in a Unicode world—not yet.  If we did, I wouldn’t keep seeing line noise on every web page where someone wrote their post in Word with smart quotes turned on and then just did a straight copy-and-paste into their CMS.  (Here we have a screenshot of text where a bullet symbol has been mangled into an a-ring and a cent sign, thus visually turning ‘Wall•E’ into ‘Wallace’.)  Ged knows I would love to be in a Unicode world, or indeed any world where such character-incompatibility idiocy was a thing of the past.  The fact that we still have those problems in 2008 almost smacks of willful malignance on the part of somebody.

Furthermore, in most (but not all) of the text editors I have available and tested, I can type “Oy!” with the font set to Webdings and get the ear, Death Star, and spider symbols.  So mentally, it’s very hard to separate those glyphs from the keyboard characters I type, which makes it very hard to just accept that what Firefox 3 is doing is correct.  Instinctively, it feels flat-out wrong.  I can trace the process intellectually, sure, but that doesn’t mean it has to make sense to me.  I expect a lot of people are going to have similar reactions.

Having gone through all that, it’s worth asking: which is less correct?  Text editors for letting me turn “Oy!” into the ear-Death Star-spider combo, or Firefox for its rigid glyph substitution?  I’m guessing that the answer depends entirely on which side of the Unicode fence you happen to stand.  For those of us who didn’t know there was a fence, there’s a bit of a feeling of a slip-and-fall right smack onto it, and that’s going to hurt no matter who you are.

line-height: abnormal

When I first wrote Cascading Style Sheets: The Definitive Guide, the part that caused me the most difficulty and headaches was the line layout material.  Several times I was sure I had it all figured out and accurately described, only to find out I was wrong.  For two weeks I corresponded with Ian Hickson and David Baron, arguing for my understanding of things and having them show me, in merciless detail, how I was wrong.  I doubt that I will ever stop owing them for their dedication to getting me through the wilderness of my own misunderstandings.

Later on, I produced a terse description of line layout which went through a protracted vetting process with the CSS Working Group and the members of www-style.  At the time it was published, there was no more detailed and accurate description of line layout available.  Even at that, corrections trickled in over the years, which made me think of it as my own tiny little The Art of Computer Programming.  Only without the small monetary reward for finding errors.

The point here is that line layout is very difficult to truly understand—even given everything I just said, I’m still not convinced that I do—and that there are often surprises lurking for anyone who goes looking into the far corners of how it happens.  As I’ve said before, my knowledge of what goes into the layout of lines of text imparts a sense of astonishment that any page can be successfully displayed in less than the projected age of the universe.

Why bring all this up?  Because I went and poked line-height: normal with a stick, and found it to be both squamous and rugose.  As with all who are driven to such madness, I now seek, grinning wildly, to infect others.

Here’s the punchline: the effects of declaring line-height: normal not only vary from browser to browser, which I had expected—in fact, quantifying those differences was the whole point—but they also vary from one font face to another, and can also vary within a given face.

I did not expect that.  At least, not consciously.

My work, let me show it to you: a JavaScript-driven test file where you can pick from a list of fonts and see what happens at a variety of sizes.  (Yes, the JS is completely obtrusive; and yes, the JS is the square of amateur hour.  Let’s move on, please.  I’m perfectly happy to replace what’s there with unobtrusive and sharper JS, as long as the basic point of the page, which is testing line-height: normal, is not compromised.  Again, moving on.)

When you first go to the test, you should (I hope) see a bunch of rulered boxes containing text using the very common font face Webdings, set at a bunch of different font sizes.  The table shows you how tall the simple line boxes are at each size, and therefore the numeric equivalent for line-height: normal at those sizes.  So if a line box is using font-size: 50px and the line box is 55 pixels tall, the numeric equivalent for line-height: normal is 1.1 (55 divided by 50).
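The measurement itself is nothing fancy.  A sketch of the core of it (not the test page’s actual code):

// Derive the numeric equivalent of line-height: normal for a face/size:
// one line of text in an otherwise unstyled div, line box height read
// back via offsetHeight, divided by the font-size.
function normalRatio(family, size) {
	var div = document.createElement('div');
	div.style.fontFamily = family;
	div.style.fontSize = size + 'px';
	div.style.lineHeight = 'normal';
	div.appendChild(document.createTextNode('Oy!'));
	document.body.appendChild(div);
	var ratio = div.offsetHeight / size;
	document.body.removeChild(div);
	return ratio;
}
// normalRatio('Times New Roman', 50) -> 1.16 in the browsers tested here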

On my PowerBook, Webdings always yields a 1:1 ratio between the font-size and line box height.  The ten-pixel font size yields a ten-pixel-tall line box, and so on.

This is actually a little surprising by itself.  The CSS 2.1 specification says:

normal
Tells user agents to set the used value to a “reasonable” value based on the font of the element. The value has the same meaning as <number>. We recommend a used value for ‘normal’ between 1.0 to 1.2. The computed value is ‘normal’.

This is basically what CSS has said since its first days (see the equivalent text in CSS1 or in CSS2 for confirmation) and there’s always been a widespread assumption that, since 1.0 is probably too crowded, something around 1.2 is much more likely.

So finding a value of 1 was a surprise.  It was an even bigger surprise to me that this held true in Camino 1.5.2, Firefox 2.0.0.14, and Safari 2.0.4, all on OS X.  Firefox 3b5 didn’t render Webdings at all, so I don’t know if it would do the same.  I actually suspect not, for reasons best left for another time (and, possibly, a final release of Firefox 3).

Various browsers doing the same thing in an under-specified area of the spec?  That can’t be right.  It’s pretty much an article of faith that given the chance to do anything differently, browsers will.  The sailing was so unexpectedly smooth that I immediately assumed a storm lurked just over the horizon.

Well, I was right.  All I had to do was start picking other font faces.

To start, I picked the next font on the list, Times New Roman, and the equivalent values for normal immediately changed.  In other words, the numeric equivalents for Times New Roman are different than those for Webdings.  The browsers weren’t maintaining a specific value for normal, but were altering it on a per-face basis.

Now, this is legal, given the way normal is under-specified.  There’s room to allow for this behavior.  It’s actually, once you think about it, a fairly good thing from a visual point of view: the best default line height for Times New Roman is probably not the best default line height for Courier New.  So while I was initially surprised, I got over it quickly.  The seemingly obvious conclusion was that browsers were actually respecting the fonts’ built-in metrics.  This was reinforced when I found that the results were exactly the same from browser to browser.

Then I looked more closely at the numbers, and confusion set back in.  For Times New Roman, I was getting values of 1.1, 1.12, 1.16, 1.15, 1.149, and 1.1499.  If you were to round all of those numbers to two decimal places, you’d get 1.10, 1.12, 1.16, 1.15, 1.15, 1.15.  If you round them all to one decimal place, you’d get 1.1, 1.1, 1.2, 1.2, 1.1, 1.1.  They’re inconsistent.

But wait, I thought, I’m trying to compare numbers I derived by dividing pixels by pixels.  Let’s turn it around.  If I multiply the most precise measurement I’ve gotten by the various font sizes, I get… carry the two… 11.499, 28.7475, 57.495, 114.99, 1149.9, 11499.  As compared to the actual values I got, which were 11, 28, 58, 115, 1149, and 11499.

Which means the results were inappropriately rounded up in some cases and down in others.  28.7475 became 28 and 1149.9 became 1149, whereas 57.495 became 58.  Even though 11.499 became 11 and 114.99 became 115.

This was consistent across all the browsers I was testing.  So again, I was suspecting the fonts themselves.

And then I switched from Times New Roman to just plain old Times, and the storm was full upon me.  I’ll give you the results in a table.

Derived normal equivalents for Times in OS X browsers

font-size   Camino 1.5.2   Firefox 2.0.0.14   Safari 2.0.4
10          1              1.2                1.3
25          1              1                  1.16
50          1              1                  1.18
100         1              1                  1.15
1000        1              1                  1.15
10000       1              1                  1.15

Much the same happened when comparing Courier New with plain old Courier: full consistency on Courier New between browsers, albeit with the same strange (non-)rounding effects as seen with Times New Roman; but inconsistency between browsers on plain Courier—with Camino yielding a flat 1 down the line, Firefox going from 1.2 to 1, and Safari having a range of values above the others’ values.

Squamous!  Not to mention rugose!

Now it’s time for the stunning conclusion that derives from all this information, which is: not here.  Sorry.  So far all I have are observations.  I may turn all this into a summary page which shows the results for all the font faces across multiple browsers and platforms, but first I’ll need to get those numbers.

I do have a few speculations, though:

  1. Firefox’s inconsistency within font faces (see Times and Courier, above) may come from face substitution.  That’s when a browser doesn’t have a given character in a given face, so it looks for a substitute in another face.  If Firefox thinks it doesn’t have 10-pixel Times, it might substitute 10-pixel something else serif-ish, and that face has different line height characteristics than Times.  I don’t know what that other face might be, since it’s not Times New Roman or Georgia, but this is one possibility.  It is not the minimum font size setting in the preferences, as I’ve triple-checked to make sure I have that set to “None”.

  2. Another possibility for Firefox’s line height weirdness is a shift from subpixel font rendering to pixelly font rendering.  10-pixel text in Firefox is distinctly pixelly compared to the other browsers I tested, while sizes above there are nice and smooth.  Why this would drive up the line height by two pixels (20%), though, is not clear to me.

  3. Much of what I’ve observed will likely be laid to rest at the doorsteps of the font faces themselves.  I’d like to know how it is that the rounding behaviors are so (mathematically) messed up within faces, though.  Perhaps ideal line heights are described as an equation rather than a simple ratio?

Again, this was all done in OS X; I’ll be very interested to find out what happens on Windows, Linux, and other operating systems.  Side note for the Mac Opera fans warming up their flamethrowers: I’ve left Opera 9.27 for OS X out of this because it seems to cap font sizes at a size well below 1000, and the limit varied from one face to another.  Webdings and Courier capped at 507 pixels, whereas Courier New capped at 574 pixels and Comic Sans MS stopped at 707 pixels.  I have no explanation, though doubtless someone will, but the upshot is that direct comparisons between Opera and the other browsers are impossible.  For sizes up to 100 pixels, the results were exactly consistent with Camino, if that means anything.

The one tentative conclusion I did reach is this: line-height: normal is a jumbled terrain of inconsistent behaviors, and it’s best avoided in any sort of precision layout work.  I’d already had that feeling, but at least now there’s some evidence to back up the feeling.

In any case, I doubt this is the last I’ll have to say on this particular topic.

Update 7 May 08: I’ve updated the test page with a fix from Ben Lowery so that it works in IE.  Thanks, Ben!  Now all I need is to add a way to type in any arbitrary font-family’s name, and we’ll have something everyone can use.  (Or else a way to use JavaScript to suck up the names of all the fonts installed on a machine and put them into the dropdown.  That would be cool, too.)

Acid Redux

So the feeds I read have been buzzing the past few days with running commentary of the WebKit and Opera teams’ race to be the first to hit 100/100 on Acid3, and then after that the effort to get a pixel-perfect match with the reference image.  Last I saw, Opera claimed to have gotten to 100 first but it looked like WebKit had gotten both with something publicly available, but I haven’t verified any of this for myself.  Nor do I have any particular plans to do so.

Because as lovely as it is to see that you can, in fact, get one or more browser implementation teams to jump in a precisely defined sequence through a series of cunningly (one might say sadistically) placed hoops, half of which are on fire and the other half lined with razor wire, it doesn’t strike me as the best possible use of the teams’ time and energy.

No, I don’t hate standards, though I may hate freedom (depends on who’s asking).  What I disagree with is the idea that if you cherry-pick enough obscure and difficult corners of a bunch of different specifications and mix them all together into a spicy meatball of difficulty, it constitutes a useful test of the specifications you cherry-picked.  Because the one does not automatically follow from the other.

For example, suppose I told you that WebKit had implemented just the bits of SMIL-related SVG needed to pass the test, and that in doing so they exposed a woefully incomplete SVG implementation, one that gets something like 2% pass rates on actual SMIL/SVG tests.  Laughable, right?  Yes, well.

Of course, that’s in a nightly build and they might totally support SMIL by the time the corresponding final version is released, and we’ll all look back on this and laugh the carefree laugh of children in springtime.  Maybe.  The real point here is that the Acid3 test isn’t a broad-spectrum standards-support test.  It’s a showpiece, and something of a Potemkin village at that.  Which is a shame, because what’s really needed right now is exhaustive test suites for specifications—XHTML, CSS, DOM, SVG, you name it.  We’ve been seeing more of these emerge recently, but they’re not enough.  I’d have been much more firmly in the cheering section had the effort that went into Acid3 gone into, say, an obsessively thorough DOM test suite.

I’d had this post in mind for a while now, really ever since Acid3 was released.  Then the horse race started to develop, and I told myself I really needed to get around to writing that post—and I got overtaken.  Well, that’s being busy for you.  It’s just as well I waited, really, because much of what I was going to say got covered by Mike Shaver in his piece explaining why Firefox 3 isn’t going to hit 100% on Acid3.  For example:

Ian’s Acid3, unlike its predecessors, is not about establishing a baseline of useful web capabilities. It’s quite explicitly about making browser developers jump… the Acid tests shouldn’t be fair to browsers, they should be fair to the web; they should be based on how good the web will be as a platform if all browsers conform, not about how far any given browser has to stretch to get there.

That’s no doubt more concisely and clearly stated than I would have managed, so it’s all for the best that he got to say it first.

By the by, I was quite intrigued by this part of Mike’s post:

You might ask why Mozilla’s not racking up daily gains, especially if you’re following the relevant bugs and seeing that people have produced patches for some issues that are covered by Acid3.

The most obvious reason is Firefox 3. We’re in the end-game of building what I really do believe is the best browser the web has ever known, and we expect to be putting it in the hands of more than 170 million users in a pretty short period of time. We’re still taking fixes for important issues, but virtually none of the issues on the Acid3 list are important enough for us to take at this stage. We don’t want to be rushing fixes in, or rushing out a release, only to find that we’ve broken important sites or regressed previous standards support, or worse introduced a security problem. Every API that’s exposed to content needs to be tested for compliance and security and reliability… We think these remaining late-stage patches are worth the test burden, often because they help make the web platform much more powerful, and reflect real-web compatibility and capability issues. Acid3’s contents, sadly, are not as often of that nature.

You know, it’s weird, but that seems really familiar, like I’ve heard or read something like that before.  Now if only I could remember…  Oh yeah!  It’s basically what the IE team said about not passing Acid2 when the IE7 betas came out, for which they were promptly excoriated.

Huh.

Well, never mind that now.  Of course it was a totally different set of circumstances and core motivations, and I’m sure there’s absolutely no parallel to be drawn between the two situations.  At all.

Returning to the main point here:  I’m a little bit sad, to tell the truth.  The original acid test was a perfect example of what I think makes for a good stress test.  Recall that the test’s original name, before it got shorthanded, was the “Box Model Acid Test”.  It was a test of CSS box model handling, including floats.  That’s all it was designed to do.  It did that fairly well for its time, considering it was part of a CSS1 test suite.  It didn’t try to combine box model testing with tests for PNG support, HTML parse error recovery, and DOM scripting.

To me, the ideal CSS test suite is one that has a bunch of basic property/value tests, like the ones I’ve been responsible for creating (1, 2), along with a bunch of acid tests for specific areas or concepts in that specification.  So an acidified CSS test suite would have individual acid tests for the box model, positioning, fonts, selectors, table layout, and so on.  It would not involve scripting or markup parsing (beyond what’s needed to handle selectors).  It would not use animated SVG icons.  Hell, it probably wouldn’t even use PNGs, except possibly alphaed PNGs when testing opacity and RGBA colors.  And maybe not even then.

So in a DOM test suite, you’d have one test page for each method or attribute, and then build some acid tests out of related bits (say, on an entire interface or set of closely related interfaces).  And maybe, at the end, you’d build an overarching acid test that rolled everything in the DOM spec into one fiendishly difficult test.  But it would be just about the DOM and whatever absolute minimum of other stuff you needed, like text rendering and maybe GIF support.  (Similarly, the CSS tests had to assume some basic HTML and CSS selector support, or else everything else fell down.)
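To be concrete about the granularity I have in mind, here’s a hypothetical fragment of one such per-method page: dirt simple and self-checking.  (Yes, the reporter itself leans on createElement; a real suite would bootstrap more carefully.)

// A hypothetical slice of a one-method DOM test page.
function report(name, pass) {
	var p = document.createElement('p');
	p.appendChild(document.createTextNode((pass ? 'PASS: ' : 'FAIL: ') + name));
	document.body.appendChild(p);
}
window.onload = function() {
	var el = document.createElement('p');
	report('createElement returns an element node', el.nodeType == 1);
	report('tagName is uppercase for HTML elements', el.tagName == 'P');
};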

And then, after all those test suites have been built up and a series of acid tests woven into them, with each one culminating in its own spec-spanning acid test, you might think about taking those end-point acid tests and slamming them all together into one super-ultra-hyper-mega acid test, something that even the xenomorphs from the Alien series would look at and say, “That’s gonna sting”.  That would be awesome.  But that’s not what we have.

I fully acknowledge that a whole lot of very clever thinking went into the construction of Acid3 (as was true of Acid2), and that a lot of very smart people have worked very hard to pass it.  Congratulations all around, really.  I just can’t help feeling like some broader and more important point has been missed.  To me, it’s kind of like meeting the general challenge of finding an economical way to loft broadband transceivers to an altitude of 25,000 feet (in order to get full coverage of large metropolitan areas while avoiding the jetstream) by daring a bunch of teams to plant a transceiver near the summit of Mount Everest—and then getting them to do it.  Progress toward the summit can be demonstrated and kudos bestowed afterward, but there’s a wider picture that seems to have been overlooked in the process.

Drugs, Bugs, and IE8

If there’s a downside to becoming a cyborg, it’s the aftermath.  I’m not talking about the dystopian corporate-state shenanigans: those are fully expected.  No, it’s the painkillers that really suck.  They basically do their job, but at the cost of mental acuity.  That is not a trade I’m happy to make.  Granted, there were some interesting physical hallucinations that came along for the ride, but that’s nowhere near enough to balance the scales.

Here’s what I mean on that last part.  At one point yesterday, lying in bed as I had been all day, I decided it was about time to straighten out my legs, which were crossed at the ankles and starting to feel a little funny.  When I sent the relevant signals to my legs, nothing really happened.  Slowly I came to realize that nothing was happening because my legs weren’t actually crossed at all.  Furthermore, it gradually dawned on me that if the sensoria I’d been getting had been correct, it would have to mean that my legs were not only crossed at the ankles, but also attached to my body backwards.

So anyway, I thought I’d write up some of my observations (thus far) regarding IE8 beta 1.  What?

I’m going to say basically the same thing I said about the first betas of IE7: test and report, but don’t fix.  That is to say, you should absolutely grab it and run it across all your own sites, and all your common destinations.  Find out what’s different, broken, or just plain strange.

But don’t start searching for workarounds.  Not yet.  Submit bug reports, yes.  Boil down the problems you hit to basic test cases and submit those, if you like.  (I do like, but I’ve got kind of a history with that sort of thing.)  Just don’t think that beta 1 represents what we’ll face in the final release.

No, I don’t have some sort of inside track; never have.  That conclusion simply seems obvious to me just by looking at how this beta acts.  For example, there’s no support at all for :first-line and :first-letter.  That’s not just a glitch.  That’s a lack of support for a CSS feature that’s been present for three major releases.  I just can’t see that omission persisting to final release.

Another problem I noticed is evident here on the home page of meyerweb.  In the sidebar, each list item has a left margin and negative text indentation, creating a classic “outdent”.  Like so:

#extra .panel li {margin-left: 1em; text-indent: -1em;}

In each of those list items is a link of some kind, usually text.  The fun part is this: the hanging outdent part of that text isn’t clickable.  So the first couple of letters of each sidebar link are inactive.  They’re colored properly, but do nothing if you try to click them.  If you click on the active part of a link, the focus outline only draws around the active part.  And, for bonus yay, scrolling the page will wipe away any outdents that are offscreen.  So as you scroll down the page, you end up with all the sidebar links having their first few letters chopped off.  Whoops.

Again, that’s something I just can’t see going unaddressed in the final release.

In both these cases, flipping IE8 back to IE7 mode makes the weirdness go away.

I’ve seen more serious problems on the wider web.  Google Maps is currently busted beyond any hope of usefulness in IE8, as many have reported.  Also, I came across a site where loading the home page just locked up IE8 completely.  I had to force-quit and relaunch.  Every time I hit that page, lockup.

Flipping to IE7 mode allowed me to browse the site without any trouble at all.

These things, taken together, have really driven something home for me: there really is a new rendering engine in there.  I don’t just mean in the sense of fixing and adding enough things that the behavior is different.  I mean that I believe there’s truly a whole new engine under the hood of IE8.  And if the Acid 2 results and public statements of the IE team are to be believed, there’s a whole new standards-based rendering engine under that hood.

That’s kind of a big deal in any event.  The last time I remember a browser with an extended release history replacing its old, creaky, grown-over-time, crap-piled-on-crap engine with (what the browser team felt was) a new, improved one was the transition from Netscape 4.5 to Netscape 6.0.  And remember how well that went?  Yee haw.

I really shouldn’t be surprised about this.  Chris Wilson, for example, used the exact words “our new layout engine” during the WaSP roundtable (transcript).  I guess I’d been assuming that was verbal shorthand for “our much-improved version of our old layout engine”.  I guess I was wrong.

So I would personally argue that this release was mislabelled.  This is not a beta release.  As far as I’m concerned, it’s an alpha, even under the kinds of old-school naming conventions I prefer.  I’m not going to go around calling it that, because that would just be unnecessarily confusing, but it’s how I’m going to think of it.

Now I’m wondering just how long it will be until final release, given the kinds of distances one usually sees between alpha and final.

Unfortunately, I just took the 6pm set of painkillers, so I’ll be wondering at about one-third speed.
