Posts from Tuesday, June 24th, 2003

Testing for Flaws


Chris Hester wrote earlier to point out the CSS2 Test Suite’s main page was completely unreadable in Internet Explorer 6 if you had the “Text Size” set to “Smaller.”  This was news to me; I’d set my text to “Medium” the day I installed IE6 and never looked back.  So I went to the page, changed my text size, and winced.

The problem has since been worked around, but to see it for yourself, as well as read about the trigger and the solution, try this testcase.  Note that if you’re using IE6 and your browser is set to “Smaller,” the testcase will start out completely unreadable.  Set it to “Medium” first, then go to the page and follow the directions.
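(For the curious: I won’t swear this is the exact trigger on that page, but the classic way to get this behavior out of IE/Win is to size text in bare ems, which IE scales far too aggressively when “Text Size” is anything other than “Medium.”  The rules below are my own illustration of the quirk and its usual workaround, not the actual styles from the Test Suite’s page.

    /* The quirk: with a bare em-based size, IE/Win exaggerates
       the user's Text Size setting, so "Smaller" can shrink
       text like this right past legibility. */
    p { font-size: 0.8em; }

    /* The usual workaround: declare a percentage size on the
       body first.  The em then computes against that inherited
       size, and IE scales it far more sensibly across all five
       Text Size settings. */
    body { font-size: 100%; }
    p { font-size: 0.8em; }

With the percentage in place, “Smaller” merely makes the text smaller instead of microscopic.)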

The flaws in IE6 continue to amaze me, and now we’re stuck with it for another three years, minimum.  Great, just great.

Dave Hyatt recently made some observations about standards-support charts (starting with Standards Charts and continuing into three posts the next day).  I agree with most of what he has to say, actually.  Charts like the “master grid” are by their nature coarse.  They can do no better than provide support information for whichever tests the chart author happened to run.  In presenting an overview and comparison of CSS support, for example, depth-of-implementation testing is sacrificed.  It has to be.  The CSS support charts I published on Web Review for years, and now on DevEdge, are basically the work of one person: me.  I wrote most of what became the W3C’s CSS1 Test Suite in the creation of the original charts, back in late 1996 and early 1997.  Back then, it was easier—bugs were more obvious, and all implementations were shallow.  The charts could afford to be as shallow.

Now, thanks to years of experience, implementations are getting much, much better, and the bugs harder to find.  Fully testing modern CSS implementations requires a far more complex set of tests than I could author in a lifetime of evenings (which is when I wrote the tests and the charts).  To be really comprehensive, you’d need to test every property and value combination on every element in HTML (or a markup language of similar complexity), which I think was once calculated to run into a few trillion combinations.  It’s a lot harder to create tests, to run tests, and to chart results than it used to be.  This fact was driven home to me recently as I worked on (finally!) updating the CSS charts.  For the tests I have at hand, most browsers score perfectly, or close to it.  I know those scores don’t reflect reality: every browser has bugs in its CSS support, some worse than others (*cough*WinIE*cough*).
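Just to sketch the scale of that claim, with numbers I’m assuming for illustration rather than taking from any real census of CSS: say there are roughly 100 properties averaging ten values apiece, which gives about a thousand property-value pairs, and around 90 HTML elements to apply them to.  Testing nothing more than every four-way combination of those pairs on every element works out to

    \binom{1000}{4} \times 90 \approx 4.1 \times 10^{10} \times 90 \approx 3.7 \times 10^{12}

which is already “a few trillion,” and that’s before nesting, inheritance, and the cascade enter the picture.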

(Aside: I’m not sure whether to feel amused or gratified that there’s support for the concept of penalizing browsers for having bugs, a concept I used in compiling the “CSS leader board” back in the day.  “Full” support earned a point, partial support got half a point, no support got zero, and a bug lost you half a point.  It was a touch crude, perhaps, but it worked.)

But I have only so many hours in each day, the same as anyone else.  It’s not reasonable to expect one or even five people to meet this challenge.  The only way to handle it is to find a moderately large crowd of CSS experts, all of whom trust one another completely, and distribute deep-test creation among them.  In a few months, they might get far enough to start running browsers through their tests.  A month or so after that, they could start compiling results, and eventually publish them.  But even assuming all of that data could be collected and presented, how useful would it really be to the Web community?  One of the keys to the original CSS support charts’ success was that they were easy to comprehend: their very shallowness made them useful.  Authors don’t have time for much more.

Implementors have different needs, of course.  If those needs are pressing enough, they’re going to have to fund positions (and I do mean more than one) to coordinate the work necessary to meet them.  The money could come out of the Quality Assurance budget, even.  In any case, if standards-support testing is a serious problem, then we’ll need a serious commitment to address it.  Who’s going to step up to the plate?

