Thoughts From Eric Archive

It’s Beginning To Snow

Published 18 years, 10 months past

So yesterday I was going to post about getting our first snowfall of the year, but Buffalo’s kind of stolen those bragging rights.  I know how they feel: almost ten years ago, Cleveland got hit with one hell of an early November storm.  On a Thursday afternoon, it was 70 degrees Fahrenheit when a cold front slammed into the city, spawning three tornadoes and dropping the temperature to the freezing point in the space of about three hours.  The winds off the lake brought sleet, then snow… four days of snow.

From the e-mail I sent to my “friends” list a few days later:

Things really picked up Saturday afternoon and evening, as I discovered when I made the mistake of trying to return to Cleveland that night— and let me tell you, the money I paid for anti-lock brakes and traction control was worth it, ten times over.  I’d probably be dead or badly injured right now if it weren’t for one or the other of those systems.  It was BAD out there.

By Tuesday, the snow depth in the University Circle area was roughly two feet— that’s average depth, not drift depth— and we’re not even in the Snow Belt.  I understand they have about twice the amount of snow, but I haven’t ventured east to find out.  To make things worse, this is heavy, wet, break-your-back-trying-to-shovel-it snow.  Anyway, there are a lot of trees which the snow has simply snapped in half— and they still have their fall colors.  I saw a maple tree the other day with brilliant red leaves peeking through a heavy blanket of snow.  Weird.  But very pretty, and more than a little fascinating.

[The snowstorms] were also thunderstorms.  I’ve seen an occasional, rare flash of lightning during a heavy snowstorm maybe five other times in my life.  In the course of one evening, I saw the sky light up twice that many times, and witnessed cloud-to-cloud lightning over Lake Erie, all while snow fell.

Lightning during a heavy snowstorm is an eerie thing— the entire sky lights up, and even the air around you seems to flash. Obviously, it’s the light being reflected by all those snowflakes, but for that instant, the entire world pulses white… or, if you’re truly lucky, an unearthly purple.  It’s almost a moment of perfect beauty in the dark.

I still remember those flashes of light, soft and terrible and fading so much more slowly than usual, perhaps as the result of a full field-of-vision afterimage, and then the strangely altered roll of thunder.  Can you even imagine what thunder filtered through a snow-muffled sky and landscape sounds like?


Jackals and HYDEsim

Published 18 years, 10 months past

Long-time readers (and Jeremy) probably remember HYDEsim, the big-boom ‘simulator’ I hacked together using the Google Maps API and some information in my personal reading library.

Well, with North Korea setting off something that might have been a nuclear device, it’s starting to show up in the darndest places.  Everyone’s favorite millennial talk show host, Glenn Beck, not only mentioned it on his radio program this past Monday, but also put a link on the main page of his site for a couple of days.  Then it got Farked.  I suppose it’s only a matter of time now before it gets Slashdotted as well.

With the increased attention, some old criticisms have arisen, as well as some misunderstandings.  For example, on Fark, someone said:

I thought it was funny how people are playing with this and think they were “safe” if they weren’t in the circle.

Here’s a mockup I did of the kind of blast damage you could expect from a single 1980’s era Russian ICBM carrying 10 MIRV warheads, each capable of 750KT yield.

Oh my yes.  That’s something that the HYDEsim code can theoretically support, since every detonation point is an object and there’s no limit on the number of objects you can have, but I never managed to add this capability.  That’s because trying to figure out the UI for placing the MIRV impact points broke my head, and when I considered how to set all that in the URI parameters (for direct linking), a tiny wisp of smoke curled out of my left ear.  Still, one of these days I should probably at least add a “MIRV ring impact” option so the young’n’s can get an idea of what had us all scared back in the old days.
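For the curious, the placement half of a “MIRV ring impact” option could be as simple as generating evenly spaced detonation points around a center.  This is purely a hypothetical sketch, not anything from HYDEsim’s actual code; the function name and the flat-earth small-distance approximation are mine:

```javascript
// Hypothetical sketch: place N MIRV impact points evenly around a center.
// Uses a small-distance flat-earth approximation, which is fine at the
// scale of a single city but not anything larger.
function mirvRing(centerLat, centerLon, radiusKm, count) {
  const kmPerDegLat = 111.32;  // approximate km per degree of latitude
  const kmPerDegLon = kmPerDegLat * Math.cos(centerLat * Math.PI / 180);
  const points = [];
  for (let i = 0; i < count; i++) {
    const theta = (2 * Math.PI * i) / count;
    points.push({
      lat: centerLat + (radiusKm * Math.sin(theta)) / kmPerDegLat,
      lon: centerLon + (radiusKm * Math.cos(theta)) / kmPerDegLon,
    });
  }
  return points;
}

// Ten warheads in a 10 km ring centered near Cleveland
const ring = mirvRing(41.5, -81.7, 10, 10);
console.log(ring.length);
```

Each returned point could then become one of HYDEsim’s detonation-point objects; the URI-parameter question (how to encode all ten points in a shareable link) is the part that produces the ear smoke.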

The interesting challenge is that a strategic nuclear strike of that variety is going to involve a whole bunch of optimum-altitude air bursts.  HYDEsim takes the simpler—and also, in this darkened day and age, more realistic—approach of calculating the effects of a ground burst.  The difference is in no sense trivial: a ground burst has a lot of energy, both thermal and radiological, absorbed by the ground (oddly enough!).  On the other hand, its highest overpressure distances are actually greater.

This is because shock energy drops with distance, of course.  An optimum-altitude air burst would be a mile or two above the ground, so the highest pressures would be directly beneath the explosion, and would be smaller than if the same weapon exploded on the ground.  With an air burst there’s less ground and man-made clutter to attenuate the shock waves as they spread out, so the total area taking some degree of damage due to overpressure is actually greater.  (There are also very complex interactions between the shock waves in the air and those reflected off the ground, but those are way beyond my ability to simulate in JavaScript.)
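As a rough illustration of how those overpressure rings behave, the classic cube-root scaling law (from Glasstone and Dolan’s The Effects of Nuclear Weapons, the kind of source in my reading library) says the distance at which a given overpressure occurs grows with the cube root of the yield.  The function name and the one-kiloton reference framing here are my own sketch, not HYDEsim’s internals:

```javascript
// Cube-root scaling: the distance at which a given overpressure occurs
// scales as the cube root of the yield.  refDistanceKm is the distance
// for that overpressure from a 1 KT reference burst, a value read off
// published curves rather than computed here.
function scaledDistanceKm(refDistanceKm, yieldKt) {
  return refDistanceKm * Math.cbrt(yieldKt);
}

// A 750 KT warhead pushes any given overpressure ring out by a factor
// of cbrt(750), a bit over nine times the 1 KT distance.
console.log(scaledDistanceKm(1, 750));
```

This is why a thousandfold increase in yield “only” gains you a tenfold increase in damage radius, and why the area of damage, not the peak pressure at ground zero, is the figure that matters for an air burst.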

Also, direct thermal radiation is spread over a much greater area with an air burst than with a ground burst—again, there’s less stuff in the way.  The amount of fallout depends on the “cleanliness” of the warhead, but for an air burst it can actually be expected to be less than from a ground burst.

People also claim that radiological energy (X-rays, neutron radiation, gamma radiation, etc.) will be the deadliest factor of all.  Actually, it’s just the opposite, unless you’re discussing something like a neutron bomb.  The amount of harmful direct-effect radiation that comes directly from the explosion is far, far smaller than the thermal energy.  And yes, I know thermal radiation is direct-effect, but there’s a large practical difference between heat and other forms of radiation.

Put another way, if you’re close enough to an exploding nuclear warhead that the amount of radiation emitted by the explosion would ordinarily kill you, the odds are overwhelmingly high that the amount of shock wave and thermal energy arriving at your position will ensure that there won’t be time for you to worry about the radiation effects.  Or anything else, really.

Remember: I’m talking there about direct radiation, not the EMP or fallout.  That’s a whole separate problem, and one HYDEsim doesn’t address, to the apparent disgust of another Farker:

The site is useless without fallout and thermal damage.

Well, I don’t know about useless, but it’s admittedly not as representative of the totality of nuclear-weapons damage as it might otherwise be.  Of course, HYDEsim is not specifically about nuclear detonations, as I showed when I mapped the Hertfordshire oil refinery explosion and djsunkid mapped the Halifax explosion of 1917.  But I certainly admit that the vast majority of explosions in the range the tool covers are going to be from nuclear weapons.

The problem with mapping fallout is that it’s kind of weather dependent, just for starters; just a few miles-per-hour difference in wind speed can drastically alter the fallout pattern, and the position of the jet stream plays a role too.  Also, the amount of fallout is dependent on the kind of detonation—anyone who was paying attention during the Cold War will remember the difference between “dirty” and “clean” nuclear warheads.  (For those of you who came late: to get a “dirty” warhead, you configure a device to reduce the explosive power but generate a lot more fallout.)

Thermal effects are something I should add, but it’s trickier than you might expect.  There’s actually an area around the explosion where there are no fires, because the shock effects snuff them out.  Beyond that, there’s a ring of fire (cue Johnny Cash).  So it’s not nearly as simple as charting overpressure, which is itself not totally simple.

And then there’s the whole “how to combine thermal-effect and overpressure rings in a way that doesn’t become totally confusing” problem.  Get ambitious, and then you have the “plus show the fallout plume without making everything a total muddle” follow-on problem.  Ah well, life’s empty without a challenge, right?

Okay, so I went through all that and didn’t actually get to my point, which is this:  I’ve been rather fascinated to see how the tool gets used.  When it was first published, there was a very high percentage of the audience who just went, “Cooool!”.  That’s still the case.  It’s the same thing that draws eyes to a traffic accident; it’s horrible, but we still want to see.

However, I also got some pushback from conservative types:  how dare I publish such a thing, when it could only be useful to terrorists?!?!?  Rather than play to the audience and inform them that I simply hate freedom, I mentioned that it was desirable to have people like you and me better understand the threats we face.  It’s not like the terrorists can’t figure this stuff out anyway.

Now I’ve seen a bunch of people from the same ideological camp use HYDEsim to mock the North Koreans’ test, which apparently misfired and only achieved a yield of about 0.5KT.  Others have taken that figure and plotted it in American cities, giving some sense of the scale of this particular threat.  Still others have done that, but with the yield the North Koreans had attempted to reach (thought to be 4KT), or even with yields up to 50KT.  In most cases, these last are shown in conjunction with commentary to the effect of “now do you understand why this is a problem?”.

This is why I do what I do, whether it’s write books or publish articles or speak at conferences or build tools or just post entries here:  to help people learn more about their world, and to help them share what they know and think and believe with others.  Sometimes that’s worth saying again, if only to remind myself.


Hospitality

Published 18 years, 10 months past

Carolyn’s been eating a lot of ice cream and watching a lot of videos the past few days, and we’re sort of concerned that she’s going to get entirely too used to both.

This is all happening because on Thursday, she had her tonsils and adenoids surgically removed.  I imagine that it’s never easy for a parent to have a child go into an operating room, but it seems like there’s something extra difficult when it’s a little girl who’s not yet three.  I know that much younger children go into operating rooms every day; my sister underwent her first operation at the age of six months.  As I grew up, visiting hospitals became a regular feature of my life, and I have little fear of hospitals or doctors to this day.  Needles, yes.  Those terrify me.  But not hospitals.

It’s just as well, because last Tuesday, I ended up in the emergency room with a broken big toe.  This was the result of an unfortunate interaction between my foot and the island in our kitchen, and at first I didn’t even think it was serious.  There wasn’t much pain, no swelling or discoloration, and I could still move my toe just fine.  One of the lessons I learned as a child is, “If you can move it, then it must not be broken”.  Turns out that’s wildly incorrect.  It’s entirely possible to move a broken appendage and not even have it hurt that much.  At first.  Eventually, though, the toe stiffens up and it starts to hurt like there’s no tomorrow.

So I went on crutches two days before my daughter went in for surgery, less than a week after Kat came off crutches, which she’d been issued after breaking an ankle a few weeks back.  She’s still wearing an Aircast most of the time.  It’s been a laugh a minute in our house, let me tell you.  (Though I must admit I’m jealous of her Aircast.  It totally looks like a jet-boot from Star Trek, right down to having what look like little reaction boosters on the back.)

So now Kat and I are hobbling around, whereas Carolyn is just about back to normal.  In fact, she was running around laughing, singing, and playing pool within a few hours of the surgery.  We figured we’d have to go back to signing with her while her throat healed, but nope, no need.  The original plan was to keep her in the hospital overnight for observation, but about six hours after surgery, the doctor told us to go home.  They’d never seen anything like it, they said, and especially not in a child so young.  Sometimes I think she just might be a superhero-in-waiting, kind of like the invincible teenager on Heroes, most of which I watched on the emergency room’s TV while waiting to have my foot examined.

I suppose most every parent thinks their kid is super, but seriously, she’s an ironclad trooper.  In a weird way, I’m inordinately proud of her, which is kind of like being proud of her for having brown hair, but there it is anyway.  I fervently hope she rebounds just as powerfully and positively from all life’s injuries.

Anyway, given that she’s technically in recovery and we’d already planned for cold soft foods and lots of videos, we just went with the plan.  Now we’re all caught up on recent episodes of The Backyardigans and have been through most of her Signing Time videos (her choice!), and are starting to think about how to wean her back to one show every third day or so.  We’re currently hoping that going back to pre-school does the trick.  Wish us luck.


W3C Change: Your Turn!

Published 18 years, 10 months past

So recently, I shared a number of ideas for improving the W3C, the last of which (posted a week ago) was to transition from a member-funded organization to a fully independent foundation of sorts, one that was funded by the interest earned by an endowment fund.  Surprisingly, there seemed to be little objection to the idea.  That was the one thing that I figured would get some pushback, mainly due to the magnitude of the change involved.  I’m still interested in hearing any counter-arguments to that one, if somebody’s got ’em (though they’d be best registered on that particular post, and not here).

The other thing I was expecting to see, but didn’t, was other people’s ideas for improvements to the W3C.  That was probably my fault, given the way I wrote the posts, which now that I look at them were set up more as soliloquies than the beginnings of a discussion.  While I think my ideas are good ones (of course!), I’m only one person, and I very much doubt I’ve thought of everything.

So what are your thoughts for improving the W3C’s effectiveness and standing in the field?


W3C Change: Full Independence

Published 18 years, 10 months past

Apologies for the break in posting just as I was getting to the best part of the W3C Change series, but back-to-back trips to Seattle and Dallas came up before I could finish writing up my thoughts.  This one was, for all the simplicity of the content, the hardest one to write, because I kept revising it to try to be more clear about what I’m proposing and how it would be an improvement.  I could keep revising ’til the end of forever, so I’m just going to take what I have now and go with it.

My third recommendation is simply this: Transform the W3C from a member-funded organization to a financially independent entity.

In order to accomplish this, the W3C would need to embark on a major capital campaign, similar to the efforts mounted by major non-profit organizations and American private universities.  The campaign parameters that come to mind are a ten-year campaign whose goal is to build an endowment of $200 million.  From the interest on this endowment—which at a relatively modest 5% return would be $10 million annually—the W3C could fund its activities.

(Note: I do not have access to the budget of the W3C, but with approximately 70 staff members at an average total cost of $125,000 per year in salary, benefits, and travel expenses, the staffing cost would be $8.75 million.  If I am lowballing the budget, then obviously the capital campaign’s goal would have to be raised.  The general approach remains the same.)
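Spelled out, the back-of-the-envelope arithmetic looks like this (all figures are the post’s assumptions, not actual W3C budget data):

```javascript
// Back-of-the-envelope check of the endowment figures above.
const endowment = 200e6;      // $200 million campaign goal (assumed)
const annualReturn = 0.05;    // 5% return (assumed, "relatively modest")
const staff = 70;             // approximate W3C staff count
const costPerStaff = 125000;  // salary + benefits + travel, per year (assumed)

const annualIncome = endowment * annualReturn;  // ~$10 million/year
const staffingCost = staff * costPerStaff;      // ~$8.75 million/year
console.log(annualIncome, staffingCost, annualIncome - staffingCost);
```

The roughly $1.25 million margin is what would cover overhead beyond staffing; if the real budget is higher, the campaign goal scales up with it.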

As the campaign progressed, the membership dues would be reduced across the board in proportion to the progress of the campaign.  Once the campaign reached its end and the full endowment had been acquired, the dues would fall to zero and the membership model would be dismantled.

You might wonder where the blinking font the W3C could get that kind of money, even over the course of a decade.  Well, 20 Internet billionaires could each donate $10 million in thanks for the W3C making their fortunes possible, and there you go.  Even if that doesn’t happen, there are many foundations whose goal is to foster better technology and communications, and who might be persuaded to contribute.  Government grants could help.  And, of course, a supporter campaign like that run by the EFF would allow individual developers to add their support.

Frankly, I don’t think the problem would be finding the money, especially over a ten-year period.  By hiring an experienced fund-raiser, I think the funds could be raised a good deal more quickly.  I think this would be especially true if Sir Tim publicly put his weight behind the effort, and made personal appeals to potential major donors.

But why would I even suggest such a thing?

  1. The current membership model creates an apparent wall between the W3C and the rest of us.  Because it costs a minimum of $15,000 over three years to become a W3C Member, individuals will rarely, if ever, be able to justify membership.  The same is true of web design and development shops.

    For primarily this reason, there is the belief that non-paying members of the community cannot join Working Groups, and that the WGs are forever closed to the rest of the world.  This is not really true, since any Working Group can ask people in the community to become Invited Experts.  These are Working Group members who don’t have to pay to get in, and aren’t necessarily held to the same contribution standards as Member representatives.  (Not that contribution standards are always upheld for them either, as I observed in an earlier post.)

    So now imagine a W3C where there are no Members.  That means that every Working Group is composed entirely of Invited Experts (except for any W3C staff members who might join).  This bridges the perceived gap, and puts community members on a more equal footing with those who would currently be Member representatives.  I’m not saying there wouldn’t be company representatives at all.  The CSS WG is going to have representatives from Microsoft, Mozilla, Apple, and so on.  The alternative is for them to not participate, and thus be at the mercy of what happens in their absence.

    Since someone’s going to bring it up, I’ll address the Microsoft question.  You might think that Microsoft could decide to both abandon, say, the CSS WG and ignore what it produces.  (Anyone could do this, but Microsoft is going to be the company accused of hypothetically plotting such a thing.)  That could well be.  But wouldn’t Microsoft departing the CSS WG be a large red flag that something’s seriously wrong, and that it needs to be addressed before worrying about exactly how the layout module is written?

    Of course, some other player could do this as easily as Microsoft.  The point is really that, if a major player in the space with which the WG is concerned departs that WG, then it identifies a situation that needs to be addressed.  The Member model actually goes some small way toward concealing that, because the dues paid create a certain impetus to put someone on a WG, even if there’s no serious interest.

    The flip side of this is the question, which I’ve heard more than once from people when I talk about this idea, “How would a WG force the players to the table?”  For example, how could a new browser-technology WG force the browser makers to join the group?

    The question itself betrays a fallacious assumption: that players should be forced to work together.  If you propose to form a WG that doesn’t interest one or more of the major players in the field, then the WG may well be flawed from the start.  The point of a WG is to produce an interoperable standard.  If a WG just goes off and does something without buy-in from players, and there’s never an implementation, then the whole effort was wasted.  On the other hand, a specification that was produced with the involvement of all the major players stands a much better chance of being implemented, and thus a much better chance of being used and appreciated by the community.

    The flip side of that flip side is the question, “What if a WG refuses to admit a player in the field?”  In other words, what if the CSS WG barred Microsoft from having a representative on the WG?  Again, that would be an enormous red flag that something had gone awry.  Any WG that refused to involve an important player in their field would need to be scrutinized, and probably reformatted.

    All this does raise the spectre of replacing a centralized model with a consensus model.  Which is just fine with me, for all the reasons I just mentioned.

  2. There is the perception—largely untrue, but no less persistent—that the W3C is controlled by those who fund it.

    It’s actually been my experience that there’s an inverse correlation between the amount of money a company puts into the W3C and the frequency with which their representatives get their way.  During my time in the CSS WG, the Microsoft people faced more resistance and more grief from the rest of the WG than the Netscape reps ever dreamed of getting.  CSS-like things which IE/Win had done faced a serious uphill battle to be incorporated in the specification, even when they were good ideas.  I don’t know how to explain this odd variance from the usual effect of money, but it was there.  Maybe in other WGs, the situation is different, although I kind of doubt it.

    But as I say, the perception is persistent.  A financially independent W3C would remove that perception.  I wouldn’t propose this kind of funding-model change solely to clear up some erroneous perceptions, but it’s an undeniably positive side effect.

  3. Full financial independence allows the W3C to do things that its dues-paying Members likely wouldn’t permit.

    Now what could I be talking about, since I just claimed that dues money doesn’t drive what the W3C does, except in inverse ways?  What I’m talking about is things like launching a program to pay Invited Experts a small stipend.  Currently, Invited Experts receive no financial support, whereas Member representatives are supported by their employers while devoting some of their time to the W3C.  I tried to imagine a world where the dues-paying Members of the W3C approved the idea of paying Experts, and although I managed to do so, it turned out to be entirely populated by talking kawaii unicorns who get joyfully teary about their perpetually rainbow-filled skies and giggle a lot.

    Here’s another W3C effort which probably could never get funded under the current model:  a university scholarship for students who plan to study the web, or uses of the web.  They might fund independent research on the effects of the web in developing countries, or what users want, or any number of other things.  Or hey, how about putting enough money into the WWW conference series that people who present papers are given a complimentary registration?  (I know—radical!)

    These things couldn’t happen if the W3C’s endowment generated only enough interest to cover staffing and overhead, but the endowment doesn’t have to be limited to just that much.  A second capital campaign, or a simple continuation of the first one, could increase the endowment, thus giving the W3C (potentially) quite a bit of discretionary funding.  It would give them the opportunity to spend money on efforts that advance their core mission (“To lead the World Wide Web to its full potential by developing protocols and guidelines that ensure long-term growth for the Web”).

There are various knock-on effects that arise from those points, of course, but I’ve gone on long enough.

As many of you have noticed, I’m effectively proposing that the W3C become a foundation instead of a consortium, albeit a foundation whose primary mission is to act as a consortium would.  I’ve avoided using terms like “non-profit” and “not-for-profit” because they might imply specific things which I don’t fully intend in terms of tax law, or whatever, but I do think of it as a generically non-profit institution; that is, one that does not strive to create a profit, except as can be invested into the endowment.

I’ve tried to explain why I believe this is a good idea, but in the end, I think the most fundamental reason is that one I can’t explain:  it just feels like the right thing to do.  It’s like I can perceive a shape without grasping all its details, but the overall shape looks right, looks better.

I fully expect that some will recoil from this idea, convinced that a foundation is a poor substitute for a consortium.  Obviously, I disagree.  I think the W3C’s future could be made much more stable with this approach, especially in financial terms.  I also believe, as I said before, that it would be no less of a force for the advancement of the web.  In fact, I think it would be a much stronger force, and have a greater positive effect, over the long term.

It is not a small undertaking, but it is an important and worthwhile effort, and I hope it is one the W3C considers seriously.


W3C Change: Working Groups

Published 18 years, 11 months past

The second area where I think the W3C could be improved is in how Working Groups are populated and managed.  To a large extent, what I propose is just a re-commitment to existing rules, and isn’t particularly radical.  That doesn’t make them any less important, of course.  Furthermore, this area of discussion doesn’t boil down to one talking point; rather, it boils down to three.

First is this: participants in a Working Group should be productive, or else leave the group, whether voluntarily or otherwise.

This is really already part of the rules, but it’s not very well enforced, in my experience.  I mean that personally, too: between mid-2003 and mid-2004, I contributed almost nothing to the CSS WG.  I didn’t even phone in for teleconferences, let alone contribute to specifications.  Now, as an Invited Expert, the participation rules aren’t quite the same for me as they are for Member representatives, but by any measure, I was deadweight.  I was only on the WG membership list out of inertia.

When the WG’s charter came up for renewal in 2004, the chair asked me if I wanted to stay in the group and start contributing again.  After some reflection, I said no, because I wasn’t going to magically have more time and energy to give to the WG.  To stay would have been dishonest at best, so I left.

Honestly, though, he should have asked me the same question (and been a little more pointed about it) six months previously.  WG chairs should do the same for any member who falls silent.  The actual reasons for the silence don’t matter, because having a WG member step down isn’t a permanent excommunication.  It’s simply an acknowledgment that the person is too busy to be a contributing member, and so leaves the group, whether temporarily or for good.

Ideally, people would voluntarily do this upon recognizing their lack of participation, but not everyone would.  I didn’t, until I was prompted.  WG chairs should prompt when necessary, and even be empowered to place someone on inactive status if they don’t contribute but refuse to step down.  Again, this isn’t a permanent decision, and it isn’t punishment.  It’s just keeping the WG membership list aligned with those who are actually contributing.

This brings me to the second point, related very closely to the first: Working Groups should have a minimum membership requirement.

If a WG doesn’t have enough members to operate, then it needs to be mothballed.  Simple as that.  If you had ten WG members and eight of them went silent, leaving you with only two active members, then it’s time to close up shop for a while.  No WG would ever be permanently shuttered this way:  it would simply be placed on “inactive” status.  Once enough people committed to being contributing WG members, it could be re-activated.  Granted, this would require a re-chartering and all the other things necessary during that process.

I also have to figure that if a WG was in danger of going inactive, some of the group’s members would get involved again.  If not, word would spread and community members would step up to offer their help.  And if none of that happened, then it would be a pretty strong indication that the WG did need to be shut down, for general lack of interest.

Of course, all this requires a WG chair who is willing to hold people’s feet to the fire, to cut inactive members, and to shut down his own WG if there aren’t enough active participants.  But then WG chairs are already required to do a lot of things, and not all of them get done.  Some are trivial; some are not.

The biggest obstacle a WG can face is its own chair, if said chair is abrasive or obstructionist or just plain out of touch.  As things stand, the only way to lodge a complaint against a chair is by working your way up the chain of command at the W3C.  That’s a pretty flat set of rather short chains, though.  In many cases, it doesn’t take a whole lot of steps to reach Sir Tim himself.  And there are even cases where WG chairs are their own bosses, hierarchically speaking, which makes it hard to effectively lodge complaints.

Thus we come to my third suggestion: there needs to be a “vote of no confidence” mechanism for WG chairs.

This is nothing more than a vote by the members of a Working Group:  do we keep our chair, or should he step down?  In this way, the WG itself can decide when it’s time for a leader to go.  I get a little wobbly over the actual vote threshold: should a chair be removed if half the WG votes against him, or two-thirds?  Tough call.  Probably a majority, on the theory that any WG with that many people opposed to the chair is already in deep trouble.

I’m also unable to decide whether I’d have these votes happen automatically, on a set schedule—say, every year right before the March Technical Plenary—or only when a member of the WG calls for one.  Both approaches have pros and cons.  I think my slight preference is for the set schedule, but on the other hand, requiring a member of the WG to call for a “no confidence” vote would be useful, in that the mere call for a vote would serve as its own indication of trouble in a WG, regardless of the vote’s outcome.

So that’s how I’d reform WG membership and leadership:  participants need to be active; WGs need a minimum membership to continue; and WGs should be able to remove their own chairs when necessary.


W3C Change: Outreach

Published 18 years, 11 months past

My first suggestion for improving the W3C is this:  every Working Group should have one member whose primary (and possibly sole) responsibility is outreach.

To make life a little easier, I’m going to refer to this position as a WGO (for Working Group Outreach).  As an aside, I’m not sure that “outreach” is exactly the right term for the role, but it captures most of what I have in mind, so I’ll use it here.  If someone comes up with a better term, I’ll be grateful.

So here’s what I envision for a WGO.

  1. The WGO keeps the public informed about the top issues on the Working Group’s agenda and immediate-future activities.  The easiest, most obvious way to do this is to post a summary of every WG FTF (face-to-face) meeting.  A summary would describe the topics the WG discussed, resolutions that were reached, which problems were not solved, and so forth.  This could be a bullet-point list, but a better summary would be something like a short article.

    Note that I do not say that the WGO should post the FTF minutes, which are often private.  The results of those discussions, though, should be public, even when no results occurred.  A summary can say that the WG discussed a topic at length and reached no resolution without saying why.  It can also say that a topic was discussed and a solution found, and then describe the solution.

    A really good WGO would produce an activity summary more often than every FTF.  I don’t know that I’d insist on a summary for every weekly teleconference, but sending out a summary once a month would be more than reasonable.  These summaries would be posted on the W3C site and to the relevant public mailing lists.  For the CSS WGO, this would always mean posting to www-style.  In cases where WG activity touched on features of XHTML or SVG, summary posts would be made to those public lists as well.

    The purpose here is to draw back some of the curtain surrounding Working Groups.  Too often, interested members of the public don’t know what the WG is up to, and that can be frustrating.  If several people are agitating for a new feature and the WG stays silent on it, it’s impossible to tell whether the WG is blowing the idea off or has considered it at length but not yet reached a decision.

    Public summaries also have the benefit of allowing some public discussion of work before the public-comment period on a proposed specification.  This would help distribute the WG’s feedback load.

  2. The WGO brings the needs and concerns of the public to the Working Group, and communicates back the WG’s reactions.  This means part of the WGO’s job is to be involved in the wider community surrounding a given activity.  The CSS WGO, for example, would spend time reading web design mailing lists, forums, blogs, and so forth to find out what people in the field want and need (in CSS terms, anyway).  The WGO would present these to the WG as items to consider.  The topics so raised, and the WG’s responses to them, would go into the next summary.

    The goal here, of course, is to have someone on the Working Group who represents the “in the trenches” folks.  If there are other members of the WG who also represent those who work in the field, that’s awesome.  With the WGO position, though, there’s the assurance of at least one person who speaks for those who actually use the products of the Working Group, and who will use any future products.

  3. The presence of a WGO in a Working Group should be a charter condition.  No group should be (re-)chartered without an identified WGO, and the extended lack of a WGO should be cause to question the continued charter of a group.

    Basically, I’m of the opinion that if a WG can’t find someone passionate enough about what they’re doing to be the WGO, then it’s time to ask whether or not they should continue at all.  Similarly, if there’s no real community for the WGO to represent, then it’s time to ask why the WG even exists.

  4. The WGO should have no other major responsibilities within the Working Group.  This means the WGO cannot be the WG’s chair, and should not be a specification editor.  Their primary job should be the two-way representation I’ve described here.

    It’s too easy to get overloaded in a WG, especially if you’re the kind of enthusiast a good WGO should be.  There needs to be a defined limit to the position, so that outreach is always topmost on that person’s agenda within the WG, and it doesn’t get buried under other duties.

In summary, a good WGO would act as a liaison between the Working Group and the community surrounding it.  A great WGO would do all that and also produce information that helps expand that community.  They could publish quick how-tos, for example, concentrating on either current or near-future specifications.

Allow me to illustrate my points with a few things that a CSS WGO might do in the course of their duties.  I’ll call this CSS WGO “Bob” to make the example less clumsy.

Recently, Bob’s been seeing a lot of calls on blogs for an “ancestor” selector.  This would be something that lets you say, “style this element based on its descendants”, such as styling all links that contain an image without having to class them.  (This idea has come up many times in the past, by the way, but has yet to be added to CSS.)  So Bob brings the “ancestor selector” subject to the WG.  The WG says, “Yes, that’s a very good idea, but it runs aground on the following problems.”  Bob would then put all that into his next summary: “The WG is in favor of adding the ancestor selector, but the following problems prevent its inclusion…”  Bob could certainly also communicate the response directly, through mailing lists or blogs, instead of just putting the response in the summary.  The latter is necessary, of course, but doing both is better.
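To make the example concrete, the kind of rule Bob’s community is asking for might look something like this.  The selector name here is invented purely for illustration; no such selector exists in CSS, and the WG might well settle on entirely different syntax:

```css
/* Hypothetical "ancestor" selector: match every a element that
   contains an img descendant, without hand-classing each link.
   The :ancestor-of() name is made up for this sketch only. */
a:ancestor-of(img) {
  border: none;
  padding: 0;
}
```

The point of the sketch is that the match is driven by an element’s descendants rather than its ancestors, which is exactly what current CSS selectors cannot express.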

How is this better?  Because the community knows the WG has considered the idea, where the WG stands on the idea, and the reasons why it hasn’t been accepted.  Everyone knows where the sticking points lie, and can make suggestions to overcome them, instead of just guessing as to why the requested feature hasn’t been adopted.  As for the reasons, they could be anything from “that’s demonstrably impossible in an entropic universe” to “not enough implementors have committed to doing it”.  As long as we know what the roadblock is, we can act accordingly.

Furthermore, Bob might accompany a new version of the Advanced Layout module with a quick how-to article that describes how to do a certain common layout, one that’s very hard to do in current CSS, with the stuff in the new module.  This provides a quick, “wow cool!” introduction to the WG’s efforts, which can energize the community and also draw in new people.

I will readily grant that many WGs have what are effectively unofficial WGOs; in a lot of ways, you could argue that I’ve been a WGO for years, as have several other people, through books and articles and forum participation and blogging and so on.  That’s not enough.  There needs to be someone inside the Working Group who is focused on explaining to the world what the WG is doing and who is explaining to the WG what the world is doing, or at least trying to do.

So that’s the first of my three major suggestions for reforming the W3C: an outreach person for every Working Group.


W3C Change: Introduction

Published 18 years, 11 months past

When I posted about the W3C, a few people responded with, “All right, fine, you’re angry with the W3C.  So what’s your alternative, smart guy?”  A fair enough question.

While I applaud the efforts of the WHAT WG and the microformats community, I’m not advocating a complete dismissal of the W3C.  The basic role filled by the W3C, that of being a central meeting place and coordinating body, is an important one.  It’s also a potential single point of failure.  Think of it like a central file server at work.  As long as the server is fine, your work can continue.  If it goes offline or, worse, its contents get corrupted, you’re in a very bad position.

When I point to the WHAT WG and microformats, I’m not holding them up as saviors or replacements.  I’m simply drawing attention to effects of the basic problem.  Both communities arose because of the nature and (lack of) speed of the W3C and its work.  We could argue about whether or not they should replace the W3C, but the simple fact is that had the W3C been more responsive and in touch with developer needs, they would never have existed in the first place.  They wouldn’t have had to exist.

If the W3C can get back on track, I wouldn’t want to see it replaced.  If it can’t, then it will be replaced, no matter what I or anyone else has to say.  That doesn’t mean it would cease to exist, of course.  It would simply become less and less relevant.  I have some ideas about how the W3C might avoid such a fate, but they aren’t things that I can cover in a single post.  Instead, I’ll do it in three parts, and the three topic areas I’m going to address are:

No small potatoes, those.  It will be interesting to find out what people think of my proposals for each.

