Thoughts From Eric Archive

Being Professionals

Published 18 years, 10 months past

Looks like the idea of a professional organization for web designers is back in the feeds.  Mark Boulton, after listening to the Hot Topics panel from @media 2006, had quite a bit to say about the idea.  Richard Rutter followed up with thoughts of his own, and then D. Keith Robinson chimed in.  There are probably more posts out there by more people, because this is one of those topics that just spreads like a virus, infecting host after host with a copy of itself.  (If you have one, feel free to drop a link in the comments.)

Since Mark started things off by mentioning my comments about education being behind the times (but didn’t actually link to me like he did everyone else; where’s the love, Mark?), I’ll start there.  I still hold that certification is much too premature for our field.  Even if we could wave a wand and create a good set of certification criteria in the next week, it would be out of date within a year.  Anything that wouldn’t go out of date that quickly would be so basic as to make a mockery of the whole idea of certifying someone as competent in the field.

I’ll concede that if a relatively well-funded organization took on the task of creating and (more crucially) keeping up to date the criteria, they could be kept useful.  Hey, maybe an independent W3C!  Well, it’s a thought.

The deeper problem is in deciding what constitutes professional competence.  Does using AJAX get you bonus points, or get you automatically disqualified?  Does absolutely everything a developer produces have to validate, even if that breaks layout or interactive features in one or more browsers?  Web design isn’t like chemistry, where the precipitate either forms or it doesn’t.  If chemical engineers had to work in conditions equivalent to those of web developers, they’d have to mix their solutions in several parallel universes, each one with different physical constants, and get the same result in all of them.

Richard’s take is that certification could be based on relevant education and cross-discipline experience.  Well, that leaves me out: my degree in History isn’t likely to be considered relevant.  Then again, I’m not actually a web designer, so maybe Richard’s organization isn’t for me.  I might be considered a developer, but on the other hand, maybe I’m just a technology writer and need to go apply for membership in their club.

Richard’s approach doesn’t really seem to make the “what qualifies” problem go away so much as abstract it into a non-issue.  You just have to have experience in a discipline.  Nobody says it has to be particularly good or bad—though evaluating that would, apparently, be up to the peers who review your application.  This introduces an interesting subjective element, one that I think may feel foreign to those of us who like to work with computers.  In any organization composed of humans, of course, you’re not going to get away from subjectivity.

In all this, though, the people who are interested in creating a professionals’ organization will have to answer a fairly tough question.  Given that both the World Organization of Webmasters and the HTML Writers Guild already exist and offer certification, why aren’t they more widely known or highly regarded, and what will make any proposed organization better or more influential?

Of everyone, I think Keith’s got the best idea with his proposed professionals’ network.  It’s probably game-able, but heck, so is entrance into a professional society.  I know I’d be very interested in participating in such a network, especially one that let people indicate who they’ve worked with, and on what.  Analyzing those link patterns could be endlessly fascinating.  If it includes community features similar to those of the original MeetUp, thus encouraging physical meetings of members, as well as the endorsement and networking features of LinkedIn, I’d be there in a hot second.

So… who wants to start forming the team to make that network come alive?


High-Profile Cooking

Published 18 years, 11 months past

Kat and I were watching “Good Eats” the other night, and as Alton slid a dish into a nice toasty warm 350-degree oven, I suddenly sat bolt upright.

“Hey, that’s our oven!” I blurted out.

Kat and I (okay, mostly Kat) recently decided that enough was enough, and that our old oven had to go.  It was a Jenn-Air that came with the house, and frankly, it was either not very good in the first place or else had just been beat all to hell.  It was cramped, dark, and uncalibrated—and had an unreadably worn set of control dials to boot—so it was time for the warhorse to go.

After a good deal of research, Kat settled on a GE JK955 electric double oven, which we were relieved to find fit almost exactly into the space where the old oven was, once we removed a couple of drawers.  It’s got all kinds of toys and features that would send any food-porn addict straight into overdrive, including a built-in probe thermometer.  It even has a nice warm proofing function, which is one of the reasons Kat picked it.

There is one thing about it that cracks me right up, and that’s the Sabbath mode.  Seriously.  When you put it into Sabbath mode (the display reads “SAb bATh” when you do so), it will help you observe Orthodox Jewish law as regards the Sabbath.  Really!  See, you’re not allowed to do any work on the Sabbath, which includes things like turning lights on and off.  Ovens fall under that restriction as well, which makes cooking dinner a bit tough.  However—and here’s the funky part—you get off the hook if you don’t directly cause the work to occur.  If the work happens indirectly, then you’re okay.

So when the oven is in Sabbath mode, you input the temperature and cook time you want.  Then you press start, and for a random amount of time that ranges from 30 seconds to a minute, nothing happens.  Then the oven kicks on.  Ta-daaa!  Indirect action!  Sure, you pressed all those buttons, but the random time delay is enough to get around your religion’s restrictions on Sabbath work.  It’s all, pardon the term, kosher.  Check out the Wired article about the man responsible for Sabbath mode, if you don’t believe me.
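
Just for fun, here’s a minimal sketch of that logic, assuming (and this is purely my guess, not anything from GE) that the firmware does little more than insert a random delay between the button press and the heating element coming on:

    // A hypothetical sketch of Sabbath-mode logic, not GE's actual firmware.
    // The button press only schedules the heat; the random delay is what
    // makes the oven's activation "indirect."
    function sabbathModeStart(targetTempF, cookMinutes, turnOnOven) {
      var delaySeconds = 30 + Math.random() * 30;  // somewhere between 30 and 60 seconds
      setTimeout(function () {
        turnOnOven(targetTempF, cookMinutes);
      }, delaySeconds * 1000);
    }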

I’m still trying to decide if this letter-of-the-law approach lessens my respect for Orthodox Jews’ conception of religion, or if I have more respect for their pragmatic willingness to hack the problem.  I think it’s the latter.  Apparently there’s still no progress on a molecular screen that will prevent the insertion of porcine products into the oven, so I guess some things are still up to the individual.

So not only do we have a frum oven, but without realizing it we had settled on the same model that A.B. himself uses, which is about as weighty an endorsement as we can imagine.  (Of course, his is the larger unit, but that’s okay—ours fills its space very nicely, thank you.)  The degree to which this makes us feel all smug and superior is probably cause for alarm.  If you hear our friends are getting ready to stage an intervention, well, that’s probably why.


It’s Beginning To Snow

Published 18 years, 11 months past

So yesterday I was going to post about getting our first snowfall of the year, but Buffalo’s kind of stolen those bragging rights.  I know how they feel: almost ten years ago, Cleveland got hit with one hell of an early November storm.  On a Thursday afternoon, it was 70 degrees Fahrenheit when a cold front slammed into the city, spawning three tornadoes and dropping the temperature to the freezing point in the space of about three hours.  The winds off the lake brought sleet, then snow… four days of snow.

From the e-mail I sent to my “friends” list a few days later:

Things really picked up Saturday afternoon and evening, as I discovered when I made the mistake of trying to return to Cleveland that night— and let me tell you, the money I paid for anti-lock brakes and traction control was worth it, ten times over.  I’d probably be dead or badly injured right now if it weren’t for one or the other of those systems.  It was BAD out there.

By Tuesday, the snow depth in the University Circle area was roughly two feet— that’s average depth, not drift depth— and we’re not even in the Snow Belt.  I understand they have about twice the amount of snow, but I haven’t ventured east to find out.  To make things worse, this is heavy, wet, break-your-back-trying-to-shovel-it snow.  Anyway, there are a lot of trees which the snow has simply snapped in half— and they still have their fall colors.  I saw a maple tree the other day with brilliant red leaves peeking through a heavy blanket of snow.  Weird.  But very pretty, and more than a little fascinating.

[The snowstorms] were also thunderstorms.  I’ve seen an occasional, rare flash of lightning during a heavy snowstorm maybe five other times in my life.  In the course of one evening, I saw the sky light up twice that many times, and witnessed cloud-to-cloud lightning over Lake Erie, all while snow fell.

Lightning during a heavy snowstorm is an eerie thing— the entire sky lights up, and even the air around you seems to flash. Obviously, it’s the light being reflected by all those snowflakes, but for that instant, the entire world pulses white… or, if you’re truly lucky, an unearthly purple.  It’s almost a moment of perfect beauty in the dark.

I still remember those flashes of light, soft and terrible and fading so much more slowly than usual, perhaps as the result of a full field-of-vision afterimage, and then the strangely altered roll of thunder.  Can you even imagine what thunder filtered through a snow-muffled sky and landscape sounds like?


Jackals and HYDEsim

Published 18 years, 11 months past

Long-time readers (and Jeremy) probably remember HYDEsim, the big-boom ‘simulator’ I hacked together using the Google Maps API and some information in my personal reading library.

Well, with North Korea setting off something that might have been a nuclear device, it’s starting to show up in the darndest places.  Everyone’s favorite millennial talk show host, Glenn Beck, not only mentioned it on his radio program this past Monday, but also put a link on the main page of his site for a couple of days.  Then it got Farked.  I suppose it’s only a matter of time now before it gets Slashdotted as well.

With the increased attention, some old criticisms have arisen, as well as some misunderstandings.  For example, on Fark, someone said:

I thought it was funny how people are playing with this and think they were “safe” if they weren’t in the circle.

Here’s a mockup I did of the kind of blast damage you could expect from a single 1980’s era Russian ICBM carrying 10 MIRV warheads, each capable of 750KT yield.

Oh my yes.  That’s something that the HYDEsim code can theoretically support, since every detonation point is an object and there’s no limit on the number of objects you can have, but I never managed to add this capability.  That’s because trying to figure out the UI for placing the MIRV impact points broke my head, and when I considered how to set all that in the URI parameters (for direct linking), a tiny wisp of smoke curled out of my left ear.  Still, one of these days I should probably at least add a “MIRV ring impact” option so the young’n’s can get an idea of what had us all scared back in the old days.
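
To give a sense of what I mean, here’s an illustrative sketch of the ring-placement and direct-linking pieces.  This is not HYDEsim’s actual code, and names like mirvRing and the pts parameter are invented for the example:

    // Hypothetical sketch, not HYDEsim's actual code: place N impact points in
    // a ring around a center coordinate, each with the same yield in kilotons.
    function mirvRing(centerLat, centerLon, radiusKm, count, yieldKT) {
      var kmPerDegLat = 111.32;
      var kmPerDegLon = 111.32 * Math.cos(centerLat * Math.PI / 180);
      var points = [];
      for (var i = 0; i < count; i++) {
        var angle = 2 * Math.PI * i / count;
        points.push({
          lat: centerLat + (radiusKm * Math.sin(angle)) / kmPerDegLat,
          lon: centerLon + (radiusKm * Math.cos(angle)) / kmPerDegLon,
          kt:  yieldKT
        });
      }
      return points;
    }

    // Direct linking: pack each point into a single (made-up) "pts" URI parameter.
    function toQueryString(points) {
      var parts = [];
      for (var i = 0; i < points.length; i++) {
        var p = points[i];
        parts.push(p.lat.toFixed(4) + ',' + p.lon.toFixed(4) + ',' + p.kt);
      }
      return 'pts=' + parts.join(';');
    }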

The interesting challenge is that a strategic nuclear strike of that variety is going to involve a whole bunch of optimum-altitude air bursts.  HYDEsim takes the simpler—and also, in this darkened day and age, more realistic—approach of calculating the effects of a ground burst.  The difference is in no sense trivial: a ground burst has a lot of energy, both thermal and radiological, absorbed by the ground (oddly enough!).  On the other hand, its highest overpressure distances are actually greater.

This is because shock energy drops with distance, of course.  An optimum-altitude air burst would be a mile or two above the ground, so the highest pressures would be directly beneath the explosion, and would be smaller than if the same weapon exploded on the ground.  With an air burst there’s less ground and man-made clutter to attenuate the shock waves as they spread out, so the total area taking some degree of damage due to overpressure is actually greater.  (There are also very complex interactions between the shock waves in the air and those reflected off the ground, but those are way beyond my ability to simulate in JavaScript.)
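
For what it’s worth, the core scaling rule behind tools like this is simple: the radius at which a given overpressure occurs grows roughly with the cube root of the yield.  Here’s a small sketch of that idea; the reference values are placeholders to be filled in from published tables, not numbers taken from HYDEsim:

    // Cube-root yield scaling: given the distance at which some overpressure
    // occurs for a reference yield, estimate that distance for another yield.
    // The reference yield and radius are assumed inputs from published data.
    function scaledRadiusKm(yieldKT, refYieldKT, refRadiusKm) {
      return refRadiusKm * Math.pow(yieldKT / refYieldKT, 1 / 3);
    }

    // Example: whatever the 5 psi radius is for a 1 KT ground burst, a 750 KT
    // burst pushes that same ring out by a factor of about nine, since the
    // cube root of 750 is roughly 9.09.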

Also, direct thermal radiation is spread over a much greater area with an air burst than with a ground burst—again, there’s less stuff in the way.  The amount of fallout depends on the “cleanliness” of the warhead, but for an air burst it can actually be expected to be less than from a ground burst.

People also claim that radiological energy (X-rays, neutron radiation, gamma radiation, etc.) will be the deadliest factor of all.  Actually, it’s just the opposite, unless you’re discussing something like a neutron bomb.  The amount of harmful direct-effect radiation that comes directly from the explosion is far, far smaller than the thermal energy.  And yes, I know thermal radiation is direct-effect, but there’s a large practical difference between heat and other forms of radiation.

Put another way, if you’re close enough to an exploding nuclear warhead that the amount of radiation emitted by the explosion would ordinarily kill you, the odds are overwhelmingly high that the amount of shock wave and thermal energy arriving at your position will ensure that there won’t be time for you to worry about the radiation effects.  Or anything else, really.

Remember: I’m talking there about direct radiation, not the EMP or fallout.  That’s a whole separate problem, and one HYDEsim doesn’t address, to the apparent disgust of another Farker:

The site is useless without fallout and thermal damage.

Well, I don’t know about useless, but it’s admittedly not as representative of the totality of nuclear-weapons damage as it might otherwise be.  Of course, HYDEsim is not specifically about nuclear detonations, as I showed when I mapped the Hertfordshire oil refinery explosion and djsunkid mapped the Halifax explosion of 1917.  But I certainly admit that the vast majority of explosions in the range the tool covers are going to be from nuclear weapons.

The problem with mapping fallout is that it’s weather-dependent, just for starters: a few miles-per-hour difference in wind speed can drastically alter the fallout pattern, and the position of the jet stream plays a role too.  Also, the amount of fallout is dependent on the kind of detonation—anyone who was paying attention during the Cold War will remember the difference between “dirty” and “clean” nuclear warheads.  (For those of you who came late: to get a “dirty” warhead, you configure a device to reduce the explosive power but generate a lot more fallout.)

Thermal effects are something I should add, but it’s trickier than you might expect.  There’s actually an area around the explosion where there are no fires, because the shock effects snuff them out.  Beyond that, there’s a ring of fire (cue Johnny Cash).  So it’s not nearly as simple as charting overpressure, which is itself not totally simple.

And then there’s the whole “how to combine thermal-effect and overpressure rings in a way that doesn’t become totally confusing” problem.  Get ambitious, and then you have the “plus show the fallout plume without making everything a total muddle” follow-on problem.  Ah well, life’s empty without a challenge, right?

Okay, so I went through all that and didn’t actually get to my point, which is this:  I’ve been rather fascinated to see how the tool gets used.  When it was first published, there was a very high percentage of the audience who just went, “Cooool!”.  That’s still the case.  It’s the same thing that draws eyes to a traffic accident; it’s horrible, but we still want to see.

However, I also got some pushback from conservative types:  how dare I publish such a thing, when it could only be useful to terrorists?!?!?  Rather than play to the audience and inform them that I simply hate freedom, I mentioned that it was desirable to have people like you and me better understand the threats we face.  It’s not like the terrorists can’t figure this stuff out anyway.

Now I’ve seen a bunch of people from the same ideological camp use HYDEsim to mock the North Koreans’ test, which apparently misfired and only achieved a yield of about 0.5KT.  Others have taken that figure and plotted it in American cities, giving some scale to the dimension of this particular threat.  Still others have done that, but with the yield the North Koreans had attempted to reach (thought to be 4KT), or even with yields up to 50KT.  In most cases, these last are shown in conjunction with commentary to the effect of “now do you understand why this is a problem?”.

This is why I do what I do, whether it’s write books or publish articles or speak at conferences or build tools or just post entries here:  to help people learn more about their world, and to help them share what they know and think and believe with others.  Sometimes that’s worth saying again, if only to remind myself.


Hospitality

Published 18 years, 11 months past

Carolyn’s been eating a lot of ice cream and watching a lot of videos the past few days, and we’re sort of concerned that she’s going to get entirely too used to both.

This is all happening because on Thursday, she had her tonsils and adenoids surgically removed.  I imagine that it’s never easy for a parent to have a child go into an operating room, but it seems like there’s something extra difficult when it’s a little girl who’s not yet three.  I know that much younger children go into operating rooms every day; my sister underwent her first operation at the age of six months.  As I grew up, visiting hospitals became a regular feature of my life, and I have little fear of hospitals or doctors to this day.  Needles, yes.  Those terrify me.  But not hospitals.

It’s just as well, because last Tuesday, I ended up in the emergency room with a broken big toe.  This was the result of an unfortunate interaction between my foot and the island in our kitchen, and at first I didn’t even think it was serious.  There wasn’t much pain, no swelling or discoloration, and I could still move my toe just fine.  One of the lessons I learned as a child is, “If you can move it, then it must not be broken”.  Turns out that’s wildly incorrect.  It’s entirely possible to move a broken appendage and not even have it hurt that much.  At first.  Eventually, though, the toe stiffens up and it starts to hurt like there’s no tomorrow.

So I went on crutches two days before my daughter went in for surgery, less than a week after Kat came off crutches, which she’d been issued after breaking an ankle a few weeks back.  She’s still wearing an Aircast most of the time.  It’s been a laugh a minute in our house, let me tell you.  (Though I must admit I’m jealous of her Aircast.  It totally looks like a jet-boot from Star Trek, right down to having what look like little reaction boosters on the back.)

So now Kat and I are hobbling around, whereas Carolyn is just about back to normal.  In fact, she was running around laughing, singing, and playing pool within a few hours of the surgery.  We figured we’d have to go back to signing with her while her throat healed, but nope, no need.  The original plan was to keep her in the hospital overnight for observation, but about six hours after surgery, the doctor told us to go home.  They’d never seen anything like it, they said, and especially not in a child so young.  Sometimes I think she just might be a superhero-in-waiting, kind of like the invincible teenager on Heroes, most of which I watched on the emergency room’s TV while waiting to have my foot examined.

I suppose most every parent thinks their kid is super, but seriously, she’s an ironclad trouper.  In a weird way, I’m inordinately proud of her, which is kind of like being proud of her for having brown hair, but there it is anyway.  I fervently hope she rebounds just as powerfully and positively from all life’s injuries.

Anyway, given that she’s technically in recovery and we’d already planned for cold soft foods and lots of videos, we just went with the plan.  Now we’re all caught up on recent episodes of The Backyardigans and have been through most of her Signing Time videos (her choice!), and are starting to think about how to wean her back to one show every third day or so.  We’re currently hoping that going back to pre-school does the trick.  Wish us luck.


W3C Change: Your Turn!

Published 18 years, 11 months past

So recently, I shared a number of ideas for improving the W3C, the last of which (posted a week ago) was to transition from a member-funded organization to a fully independent foundation of sorts, one that was funded by the interest earned by an endowment fund.  Surprisingly, there seemed to be little objection to the idea.  That was the one thing that I figured would get some pushback, mainly due to the magnitude of the change involved.  I’m still interested in hearing any counter-arguments to that one, if somebody’s got ’em (though they’d be best registered on that particular post, and not here).

The other thing I was expecting to see, but didn’t, was other people’s ideas for improvements to the W3C.  That was probably my fault, given the way I wrote the posts, which now that I look at them were set up more as soliloquies than the beginnings of a discussion.  While I think my ideas are good ones (of course!), I’m only one person, and I very much doubt I’ve thought of everything.

So what are your thoughts for improving the W3C’s effectiveness and standing in the field?


W3C Change: Full Independence

Published 18 years, 11 months past

Apologies for the break in posting just as I was getting to the best part of the W3C Change series, but back-to-back trips to Seattle and Dallas came up before I could finish writing up my thoughts.  This one was, for all the simplicity of the content, the hardest one to write, because I kept revising it to try to be more clear about what I’m proposing and how it would be an improvement.  I could keep revising ’til the end of forever, so I’m just going to take what I have now and go with it.

My third recommendation is simply this: Transform the W3C from a member-funded organization to a financially independent entity.

In order to accomplish this, the W3C would need to embark on a major capital campaign, similar to the efforts mounted by major non-profit organizations and American private universities.  The parameters that come to mind are a ten-year campaign with the goal of building an endowment of $200 million.  From the interest on this endowment—which at a relatively modest 5% return would be $10 million annually—the W3C could fund its activities.

(Note: I do not have access to the budget of the W3C, but with approximately 70 staff members at an average total cost of $125,000 per year in salary, benefits, and travel expenses, the staffing cost would be $8.75 million.  If I am lowballing the budget, then obviously the capital campaign’s goal would have to be raised.  The general approach remains the same.)
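
For the curious, here’s that back-of-the-envelope arithmetic laid out in one place.  These are the same rough figures as above, not actual W3C budget numbers:

    // Rough figures from the text above, not actual W3C budget data.
    var endowment    = 200000000;  // ten-year campaign goal, in dollars
    var annualReturn = 0.05;       // relatively modest rate of return
    var staffCount   = 70;
    var costPerStaff = 125000;     // salary, benefits, and travel, per person per year

    var annualIncome = endowment * annualReturn;    // $10,000,000
    var staffingCost = staffCount * costPerStaff;   //  $8,750,000
    var remainder    = annualIncome - staffingCost; //  $1,250,000 for everything else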

As the campaign progressed, the membership dues would be reduced across the board in proportion to the progress of the campaign.  Once the campaign reached its end and the full endowment had been acquired, the dues would fall to zero and the membership model would be dismantled.

You might wonder where the blinking font the W3C could get that kind of money, even over the course of a decade.  Well, 20 Internet billionaires could each donate $10 million in thanks for the W3C making their fortunes possible, and there you go.  Even if that doesn’t happen, there are many foundations whose goal is to foster better technology and communications, and who might be persuaded to contribute.  Government grants could help.  And, of course, a supporter campaign like that run by the EFF would allow individual developers to add their support.

Frankly, I don’t think the problem would be finding the money, especially over a ten-year period.  By hiring an experienced fund-raiser, I think the funds could be raised a good deal more quickly.  I think this would be especially true if Sir Tim publicly put his weight behind the effort, and made personal appeals to potential major donors.

But why would I even suggest such a thing?

  1. The current membership model creates an apparent wall between the W3C and the rest of us.  Because it costs a minimum of $15,000 over three years to become a W3C Member, individuals will rarely, if ever, be able to justify membership.  The same is true of web design and development shops.

    Primarily for this reason, there is a belief that non-paying members of the community cannot join Working Groups, and that the WGs are forever closed to the rest of the world.  This is not really true, since any Working Group can ask people in the community to become Invited Experts.  These are Working Group members who don’t have to pay to get in, and aren’t necessarily held to the same contribution standards as Member representatives.  (Not that contribution standards are always upheld for them either, as I observed in an earlier post.)

    So now imagine a W3C where there are no Members.  That means that every Working Group is composed entirely of Invited Experts (except for any W3C staff members who might join).  This bridges the perceived gap, and puts community members on a more equal footing with those who would currently be Member representatives.  I’m not saying there wouldn’t be company representatives at all.  The CSS WG is going to have representatives from Microsoft, Mozilla, Apple, and so on.  The alternative is for them to not participate, and thus be at the mercy of what happens in their absence.

    Since someone’s going to bring it up, I’ll address the Microsoft question.  You might think that Microsoft could decide to both abandon, say, the CSS WG and ignore what it produces.  (Anyone could do this, but Microsoft is going to be the company accused of hypothetically plotting such a thing.)  That could well be.  But wouldn’t Microsoft departing the CSS WG be a large red flag that something’s seriously wrong, and that it needs to be addressed before worrying about exactly how the layout module is written?

    Of course, some other player could do this as easily as Microsoft.  The point is really that, if a major player in the space with which the WG is concerned departs that WG, then it identifies a situation that needs to be addressed.  The Member model actually goes some small way toward concealing that, because the dues paid create a certain impetus to put someone on a WG, even if there’s no serious interest.

    The flip side of this is the question, which I’ve heard more than once from people when I talk about this idea, “How would a WG force the players to the table?”  For example, how could a new browser-technology WG force the browser makers to join the group?

    The question itself betrays a fallacious assumption: that players should be forced to work together.  If you propose to form a WG that doesn’t interest one or more of the major players in the field, then the WG may well be flawed from the start.  The point of a WG is to produce an interoperable standard.  If a WG just goes off and does something without buy-in from players, and there’s never an implementation, then the whole effort was wasted.  On the other hand, a specification that was produced with the involvement of all the major players stands a much better chance of being implemented, and thus a much better chance of being used and appreciated by the community.

    The flip side of that flip side is the question, “What if a WG refuses to admit a player in the field?”  In other words, what if the CSS WG barred Microsoft from having a representative on the WG?  Again, that would be an enormous red flag that something had gone awry.  Any WG that refused to involve an important player in their field would need to be scrutinized, and probably re-formed.

    All this does raise the spectre of replacing a centralized model with a consensus model.  Which is just fine with me, for all the reasons I just mentioned.

  2. There is the perception—largely untrue, but no less persistent—that the W3C is controlled by those who fund it.

    It’s actually been my experience that there’s an inverse correlation between the amount of money a company puts into the W3C and the frequency with which their representatives get their way.  During my time in the CSS WG, the Microsoft people faced more resistance and more grief from the rest of the WG than the Netscape reps ever dreamed of getting.  CSS-like things which IE/Win had done faced a serious uphill battle to be incorporated in the specification, even when they were good ideas.  I don’t know how to explain this odd variance from the usual effect of money, but it was there.  Maybe in other WGs, the situation is different, although I kind of doubt it.

    But as I say, the perception is persistent.  A financially independent W3C would remove that perception.  I wouldn’t propose this kind of funding-model change solely to clear up some erroneous perceptions, but it’s an undeniably positive side effect.

  3. Full financial independence allows the W3C to do things that its dues-paying Members likely wouldn’t permit.

    Now what could I be talking about, since I just claimed that dues money doesn’t drive what the W3C does, except in inverse ways?  What I’m talking about is things like launching a program to pay Invited Experts a small stipend.  Currently, Invited Experts receive no financial support, whereas Member representatives are supported by their employers while devoting some of their time to the W3C.  I tried to imagine a world where the dues-paying Members of the W3C approved the idea of paying Experts, and although I managed to do so, it turned out to be entirely populated by talking kawaii unicorns who get joyfully teary about their perpetually rainbow-filled skies and giggle a lot.

    Here’s another W3C effort which probably could never get funded under the current model:  a university scholarship for students who plan to study the web, or uses of the web.  They might fund independent research on the effects of the web in developing countries, or what users want, or any number of other things.  Or hey, how about putting enough money into the WWW conference series that people who present papers are given a complimentary registration?  (I know—radical!)

    These things couldn’t happen if the W3C’s endowment generated only enough interest to cover staffing and overhead, but the endowment doesn’t have to be limited to just that much.  A second capital campaign, or a simple continuation of the first one, could increase the endowment, thus giving the W3C (potentially) quite a bit of discretionary funding.  It would give them the opportunity to spend money on efforts that advance their core mission (“To lead the World Wide Web to its full potential by developing protocols and guidelines that ensure long-term growth for the Web”).

There are various knock-on effects that arise from those points, of course, but I’ve gone on long enough.

As many of you have noticed, I’m effectively proposing that the W3C become a foundation instead of a consortium, albeit a foundation whose primary mission is to act as a consortium would.  I’ve avoided using terms like “non-profit” and “not-for-profit” because they might imply specific things which I don’t fully intend in terms of tax law, or whatever, but I do think of it as a generically non-profit institution; that is, one that does not strive to create a profit, except as can be invested into the endowment.

I’ve tried to explain why I believe this is a good idea, but in the end, I think the most fundamental reason is that one I can’t explain:  it just feels like the right thing to do.  It’s like I can perceive a shape without grasping all its details, but the overall shape looks right, looks better.

I fully expect that some will recoil from this idea, convinced that a foundation is a poor substitute for a consortium.  Obviously, I disagree.  I think the W3C’s future could be made much more stable with this approach, especially in financial terms.  I also believe, as I said before, that it would be no less of a force for the advancement of the web.  In fact, I think it would be a much stronger force, and have a greater positive effect, over the long term.

It is not a small undertaking, but it is an important and worthwhile effort, and I hope it is one the W3C considers seriously.


W3C Change: Working Groups

Published 18 years, 11 months past

The second area where I think the W3C could be improved is in how Working Groups are populated and managed.  To a large extent, what I propose is just a re-commitment to existing rules, and isn’t particularly radical.  That doesn’t make these suggestions any less important, of course.  Furthermore, this area of discussion doesn’t boil down to one talking point; rather, it boils down to three.

First is this: participants in a Working Group should be productive, or else leave the group, whether voluntarily or otherwise.

This is really already part of the rules, but it’s not very well enforced, in my experience.  I mean that personally, too: between mid-2003 and mid-2004, I contributed almost nothing to the CSS WG.  I didn’t even phone in for teleconferences, let alone contribute to specifications.  Now, as an Invited Expert, the participation rules aren’t quite the same for me as they are for Member representatives, but by any measure, I was deadweight.  I was only on the WG membership list out of inertia.

When the WG’s charter came up for renewal in 2004, the chair asked me if I wanted to stay in the group and start contributing again.  After some reflection, I said no, because I wasn’t going to magically have more time and energy to give to the WG.  To stay would have been dishonest at best, so I left.

Honestly, though, he should have asked me the same question (and been a little more pointed about it) six months previously.  WG chairs should do the same for any member who falls silent.  The actual reasons for the silence don’t matter, because having a WG member step down isn’t a permanent excommunication.  It’s simply an acknowledgment that the person is too busy to be a contributing member, and so leaves the group, whether temporarily or for good.

Ideally, people would voluntarily do this upon recognizing their lack of participation, but not everyone would.  I didn’t, until I was prompted.  WG chairs should prompt when necessary, and even be empowered to place someone on inactive status if they don’t contribute but refuse to step down.  Again, this isn’t a permanent decision, and it isn’t punishment.  It’s just keeping the WG membership list aligned with those who are actually contributing.

This brings me to the second point, related very closely to the first: Working Groups should have a minimum membership requirement.

If a WG doesn’t have enough members to operate, then it needs to be mothballed.  Simple as that.  If you had ten WG members and eight of them went silent, leaving you with only two active members, then it’s time to close up shop for a while.  No WG would ever be permanently shuttered this way:  it would simply be placed on “inactive” status.  Once enough people committed to being contributing WG members, it could be re-activated.  Granted, this would require a re-chartering and all the other things necessary during that process.

I also have to figure that if a WG was in danger of going inactive, some of the group’s members would get involved again.  If not, word would spread and community members would step up to offer their help.  And if none of that happened, then it would be a pretty strong indication that the WG did need to be shut down, for general lack of interest.

Of course, all this requires a WG chair who is willing to hold people’s feet to the fire, to cut inactive members, and to shut down his own WG if there aren’t enough active participants.  But then WG chairs are already required to do a lot of things, and not all of them get done.  Some are trivial; some are not.

The biggest obstacle a WG can face is its own chair, if said chair is abrasive or obstructionist or just plain out of touch.  As things stand, the only way to lodge a complaint against a chair is by working your way up the chain of command at the W3C.  That’s a pretty flat set of rather short chains, though.  In many cases, it doesn’t take a whole lot of steps to reach Sir Tim himself.  And there are even cases where WG chairs are their own bosses, hierarchically speaking, which makes it hard to effectively lodge complaints.

Thus we come to my third suggestion: there needs to be a “vote of no confidence” mechanism for WG chairs.

This is nothing more than a vote by the members of a Working Group:  do we keep our chair, or should he step down?  In this way, the WG itself can decide when it’s time for a leader to go.  I get a little wobbly over the actual vote threshold: should a chair be removed if half the WG votes against him, or two-thirds?  Tough call.  Probably a majority, on the theory that any WG with that many people opposed to the chair is already in deep trouble.

I’m also unable to decide whether I’d have these votes happen automatically, on a set schedule—say, every year right before the March Technical Plenary—or only when a member of the WG calls for one.  Both approaches have pros and cons.  I think my slight preference is for the set schedule, but on the other hand, requiring a member of the WG to call for a “no confidence” vote would be useful, in that the mere call for a vote would serve as its own indication of trouble in a WG, regardless of the vote’s outcome.

So that’s how I’d reform WG membership and leadership:  participants need to be active; WGs need a minimum membership to continue; and WGs should be able to remove their own chairs when necessary.

