Posts in the Tech Category

Events and A Day, Belatedly

Published 15 years, 7 months past

I’m a bad conference organizer.

Why?  Because we opened the An Event Apart 2010 schedule for sales back in, um, flippin’ November, and I never mentioned it here.  Cripes, I never even posted when we announced the lineup of cities.  I could go through the great big long sob-story list of reasons why 2009 was really tough and blah blah blah, but when you get right down to it, I fell down on my job.

Okay.  So.  Time to correct that.

(deep breath)

Hey everyone, check it out: the complete tour schedule for An Event Apart 2010!  Woohoooo!

  1. Seattle: April 5-7, 2010 (yes, three days; more on that anon)
  2. Boston: May 24-25, 2010
  3. Minneapolis: July 26-27, 2010
  4. Washington, DC: September 16-17, 2010
  5. San Diego: November 1-2, 2010

We’ve got a pretty killer lineup, if I do say so myself.  You can get the mostly-complete list from our opening-of-sales announcement last November.  It lists the people we had confirmed at the time; there have been a few additions since then.  Check out your city of choice to see who’s going to be there!  (But always remember that speaker lineups are subject to change: speakers are people too, and life has a way of interfering with schedules.  I myself had to withdraw from An Event Apart Boston last year due to a family emergency.)

The price to register for these two-day, one-track Events is the same as it was in 2009, and there are educational and group discounts available for those who are interested.

But wait, I just said “two-day” when the first show of the year is clearly three days.  What gives?

Seattle is the site of our first-ever A Day Apart, a full-day workshop that can be attended on its own or as part of a full three days of Event Apart ecstasy.  And the inaugural Day Apart will be nothing less than a detailed plunge into HTML 5 and CSS3 with Jeremy Keith and Dan Cederholm.  Jeremy handles the markup; Dan gets stylish.  It’s going to be fantastic.  I’m going to be in the back of the room for the whole day, soaking up as much as I can.

If you want to attend just the workshop, it’s $399 for the whole day if you buy an early bird ticket (available through March 5th).  The price goes up $50 when early bird ends, and another $100 if you buy at the door.  But I wouldn’t recommend waiting that long: I expect there won’t be any seats left by then, so if you show up unannounced on the day of the workshop and ask to buy a ticket, we will most likely have to turn you away.

On the other hand, maybe you’d like to experience more than just one day of AEA goodness.  Maybe you’d like to go whole hog and attend both the two-day Event Apart and the subsequent Day Apart, soaking up all the knowledge and enthusiasm and camaraderie that typifies An Event Apart.  And who could blame you?  If you do that, then the total early bird price for all three days is $1,190, whereas buying the event and workshop passes separately would total $1,294.  That’s right: you actually get slightly more than $100 off the cost of the workshop if you attend all three days, over and above the early bird discount.  (Or you can think of it as getting $100+ off the cost of the conference.  We’re not fussy.)

As it happens, these three-day passes have proved quite popular.  So if you want to get your hands on one of those — or on any Seattle tickets, whether one, two, or three days — I wouldn’t wait too long.  Our internal analyses suggest that there will come a time, some time before the doors open on April 5th, that the ability to buy a ticket will cease to be.  It may even pine for a fjord or two.

As for the four shows that come after Seattle, well, they’re looking pretty popular too.

I know I say this every year, but I’m really excited about what we’ve got planned for the year.  Jeffrey and I constantly and (we hope) consistently strive to create an event that we ourselves want to attend, and that’s absolutely true of the shows and workshop we have planned in 2010.  I can’t wait to hear what the speakers and attendees have to share.  Hope to see you there!


MIX Judging

Published 15 years, 7 months past

I was recently honored to be asked to be a judge for the MIX 10k Smart Coding Challenge, running in conjunction with Microsoft’s MIX conference.  The idea is to create a really great web application that totals no more than 10KB in its unzipped state.

Why did I agree to participate?  As much as I’d like to say “fat sacks of cash”, that wasn’t it at all.  (Mostly due to the distinct lack of cash, sacked or otherwise.  Sad face.)  The contest’s entry requirements actually say it for me.  In excerpted form:

  • The entry MUST use one or more of the following technologies: Silverlight, Gestalt or HTML5…
  • The entry MUST function in 3 or more of the following browsers: Internet Explorer, Firefox, Safari, Opera, or Chrome…
  • The entry MAY use any of the following additional technology components…
    • CSS
    • JavaScript
    • XAML/XML
    • Ruby
    • Python
    • Text, Zip and Image files (e.g. png, jpg or gif)

Dig that:  not only is the contest open to HTML 5 submissions, but it has to be cross-browser compatible.  Okay, technically it only has to be three-out-of-five compatible, but still, that’s a great contest requirement.  Also note that while IE is one of the five, it is not a required one of the five.

I imagine there will be a fair number of Silverlight and Gestalt entries, and I might look at them, but I’m really there — was really asked — because of the HTML 5 entries.  By which I mean the open web entries, since any HTML 5 entry is also going to use CSS, JavaScript, and so on.

The downside here is that the contest ends in just one week, at 3pm U.S. Pacific time on 29 January.  I know that time is tight, but if you’ve got a cool HTML 5-based application running around in your head, this just might be the time to let it out.


Correcting Corrupted Characters

Published 15 years, 9 months past

At some point, for some reason I cannot quite fathom, a WordPress or PHP or mySQL or some other upgrade took all of my WordPress database’s UTF-8 and translated it to (I believe) ISO-8859-1 and then dumped the result right back into the database.  So “Emil Björklund” became “Emil BjÃ¶rklund”.  (If those looked the same to you, then I see “BjÃ¶rklund” for the second one, and you should tell me which browser and OS you’re using in the comments.)  This happened all throughout the WordPress database, including to commonly-used characters like ‘smart’ quotes, both single and double; em and en dashes; ellipses; and so on.  It also apparently happened in all the DB fields, so not only were posts and comments affected, but commenters’ names as well (for example).

And I’m pretty sure this isn’t just a case of the correct characters lurking in the DB and being downsampled on their way to me, as I have WordPress configured to use UTF-8, the site’s head contains a meta that declares UTF-8, and a peek at the HTTP response headers shows that I’m serving UTF-8.  Of course, I’m not really expert at this, so it’s possible that I’ve misunderstood or misinterpreted, well, just about anything.  To be honest, I find it deeply objectionable that this kind of stuff is still a problem here on the eve of 2010, and in general, enduring the effluvia of erroneous encoding makes my temples throb in a distinctly unhealthy fashion.
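(If you want to check that last bit yourself, it takes one line in a modern browser console.  The URL here is this site’s; the charset string is whatever your server actually sends.)

    // Log the Content-Type header the server sends for a page:
    fetch('https://meyerweb.com/').then(function (res) {
      console.log(res.headers.get('content-type'));
    }); // e.g. "text/html; charset=UTF-8"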

Anyway.  Moving on.

I found a search-and-replace plugin—ironically enough, one written by a person whose name contains a character that would currently be corrupted in my database—that lets me fix the errors I know about, one at a time.  But it’s a sure bet there are going to be tons of these things littered all over the place and I’m not likely to find them all, let alone be able to fix them all by hand, one find-and-replace at a time.

What I need is a WordPress plugin or something that will find the erroneous character strings in various fields and turn them back into good old UTF-8.  Failing that, I need a good table that shows the ISO-8859-1 equivalents of as many UTF-8 characters as possible, or else a way to generate that table for myself.  With that table in hand, I at least have a chance of writing a plugin to go through and undo the mess.  I might even have it monitor the DB to see if it happens again, and give me a big “Clean up!” button if it does.
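For what it’s worth, here’s a minimal sketch of the transform I have in mind, in Node-flavored JavaScript rather than an actual WordPress plugin.  The damage was UTF-8 bytes decoded as ISO-8859-1, so undoing it means re-encoding the mangled text as Latin-1 and decoding the resulting bytes as UTF-8; the table I mentioned can be generated by running known-good characters through the damaging direction.

    // Undo the damage: Latin-1-encode the mangled text, decode as UTF-8.
    const fix = (mangled) => Buffer.from(mangled, 'latin1').toString('utf8');
    console.log(fix('Emil BjÃ¶rklund')); // "Emil Björklund"

    // Build a lookup table by running known characters the other way.
    // Caveat: this round-trips cleanly only for characters in the
    // 0xA0-0xFF range (ö, é, and friends); smart quotes and dashes
    // corrupt via Windows-1252 (which MySQL calls "latin1"), so they
    // need a cp1252-aware codec instead.
    const table = {};
    for (const ch of ['ö', 'é', 'å', 'ü']) {
      table[Buffer.from(ch, 'utf8').toString('latin1')] = ch;
    }
    console.log(table); // { 'Ã¶': 'ö', 'Ã©': 'é', 'Ã¥': 'å', 'Ã¼': 'ü' }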

So: anyone got some pointers they could share, information that might help, even code that might make the whole thing go away?


Pseudo-Phantoms

Published 15 years, 10 months past

In the course of a recent debugging session, I discovered a limitation of web inspectors (Firebug, Dragonfly, Safari’s Web Inspector, et al.) that I hadn’t quite grasped before: they don’t show pseudo-elements and they’re not so great with pseudo-classes.  The one semi-exception to this rule is Internet Explorer 8’s built-in Developer Tool, which shows pseudo-element rules just fine (though not, as we’ll see, the generated content they produce).

Here’s an example of what I’m talking about:

p::after {content: " -\2761-"; font-size: smaller;}

Drop that style into any document that has paragraphs.  Load it up in your favorite development browser.  Now inspect a paragraph.  You will not see the generated content in the DOM view, and you won’t see the pseudo-element rule in the Styles tab (except in IE, where you get the latter, though not the former).

The problem isn’t that I used an escaped Unicode reference; take that out and you’ll still see the same results, as on the test page I threw together.  It isn’t the double-colon syntax, either, which all modern browsers handle just fine; and anyway, I can take it back to a single colon and still see the same results.  ::first-letter, ::first-line, ::before, and ::after are all basically invisible in most inspectors.

This can be a problem when developing, especially in cases such as having a forgotten, runaway generated-content clearfix making hash of the layout.  No matter how many times you inspect the elements that are behaving strangely, you aren’t going to see anything in the inspector that tells you why the weirdness is happening.
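For instance, here’s the sort of generated-content clearfix rule I mean, in the single-colon form it commonly took; if a selector like this matches more elements than you intended, nothing in the inspector will tell you why floats are suddenly being cleared:

    .clearfix:after {
      content: ".";
      display: block;
      height: 0;
      clear: both;
      visibility: hidden;
    }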

The same is largely true for dynamic pseudo-classes.  If you style all five link states, only two will show up in most inspectors—either :link or :visited, depending on whether you’ve visited the link’s target; and :focus.  (You can sometimes also get :hover in Dragonfly, though I’ve not been able to do so reliably.  IE8’s Developer Tool always shows a:link even when the link is visited, and doesn’t appear to show any other link states.  …yes, this is getting complicated.)
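To make that concrete, here are all five states styled in the traditional link-visited-focus-hover-active order; for any given link, most inspectors will show you only the first or the second of these, plus :focus:

    a:link    {color: blue;}
    a:visited {color: purple;}
    a:focus   {outline: 1px dotted;}
    a:hover   {color: red;}
    a:active  {color: maroon;}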

The more static pseudo-classes, like :first-child, do show up pretty well across the board (except in IE, which doesn’t support all the advanced static pseudo-classes; e.g., :last-child).

I can appreciate that inspectors face an interesting challenge here.  Pseudo-elements are just that, and aren’t part of the actual structure.  And yet Internet Explorer’s Developer Tool manages to find those rules and display them without any fuss, even if it doesn’t show generated content in its DOM view.  Some inspectors do better than others with dynamic pseudo-classes, but the fact remains that you basically can’t see some of them even though they will potentially apply to the inspected link at some point.

I’d be very interested to know what inspector teams encountered in trying to solve this problem, or if they’ve never tried.  I’d be especially interested to know why IE shows pseudo-elements when the others don’t—is it made simple by their rendering engine’s internals, or did someone on the Developer Tool team go to the extra effort of special-casing those rules?

For me, however, the overriding question is this: what will it take for the various inspectors to behave more like IE’s does, and show pseudo-element and pseudo-class rules that are associated with the element currently being inspected?  And as a bonus, to get them to show in the DOM view where the pseudo-elements actually live, so to speak?

(Addendum: when I talk about IE and the Developer Tool in this post, I mean the tool built into IE8.  I did not test the Developer Toolbar that was available for IE6 and IE7.  Thanks to Jeff L for pointing out the need to be clear about that.)


HTML5 And You

Published 16 years, 6 days past

I mentioned in my previous post that I “had come away with my head reeling from the massive length and depth of the often-changing specification”, which is entirely true.  Printouts of the current draft of the HTML5 spec can reach, depending on your operating system and installed fonts, somewhere north of 900 pages.  Yes: nine hundred.  There are unabridged Stephen King novels that run shorter.

You might well say to yourself: “Self, is it just me, or are the people doing this completely off their everlovin’ rockers?  Because the specification for something as fundamentally simple as HTML should reach maybe 200 pages, max.”  You might even despair that the entire enterprise is doomed to failure precisely because nobody sane will ever sit down to read that entire doorstop.

But there’s no real reason to panic, because here’s the thing about the HTML5 specification that might not be obvious right away:  it’s not for you.  It’s for implementors.  And that’s a good thing.

If you do start reading the HTML5 draft, you’ll start running into really lengthy, excruciatingly detailed algorithms for, say, parsing a time component.  Or moving through the browser’s history.  Or submitting a form.  There’s an entire (long) chapter on how to process the HTML syntax.
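To give a feel for what those look like, here’s a loose JavaScript condensation of the draft’s rules for parsing a time component: collect two digits for the hour, expect a colon, collect two digits for the minute, optionally do the same for seconds, and range-check everything.  (The real thing is far more exhaustive; a regular expression stands in here for its character-by-character steps.)

    function parseTimeComponent(input) {
      var m = /^(\d\d):(\d\d)(?::(\d\d)(?:\.\d+)?)?$/.exec(input);
      if (!m) return null; // a “parse error”, in spec terms
      var hour = Number(m[1]), minute = Number(m[2]);
      var second = m[3] ? Number(m[3]) : 0;
      if (hour > 23 || minute > 59 || second > 59) return null;
      return {hour: hour, minute: minute, second: second};
    }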

Those are all good things, actually.  They greatly increase the chances of interoperability actually happening within our lifetimes.  There’s no guessing about, well, much of anything.  It’s all been exactingly defined, to the extent that one can exactingly define anything using a human language.  A browser team doesn’t have to wonder, or even guess, what to do when the document has been completely parsed.  It’s all spelled out.  And the people on those browser teams will, in the end, be the people who read that entire doorstop.  (Their sanity is another matter, and not discussed here.)

How is all that stuff relevant to you, the author?  In the sense that when browser teams follow the spec, their products will be interoperable, which is to say consistent.  (Just imagine that for a moment.)

Beyond that, though, the detailed implementation stuff isn’t relevant to you.  You are not expected to know all those algorithms in order to write HTML documents.  Pretty much all you need to know is the markup.  That’s the part that should be no more than 200 pages, yeah?

Turns out it is, and by a comfortable margin.  Michael(tm) Smith’s HTML5: The Markup Language is a version of the HTML5 draft with all of those eye-wateringly pedantic implementor sections stripped out, and when I generated a PDF it came in at 147 pages.  That’s what you really need in order to get up to speed on what’s in HTML5.  It’s for you.


Nine Into Five

Published 16 years, 1 week past

Like so many others, I had tried to dig into the meat of HTML5 and figure out just what the heck was going on.  Like so many others, I had come away with my head reeling from the massive length and depth of the often-changing specification, unsure of the real meaning of much of what I had read.  And like so many others, I had gone to read the commentary surrounding HTML5 and come away deeply dispirited by the confusion, cross-claims, and rancor I found.

Then I received an invitation to join a small, in-person gathering of like-minded people, many of them just as confused and dispirited as I, to turn our collective focus to the situation and see what we found.  I already had plans for the meeting’s scheduled dates.  I altered the plans.

Over two long days, we poked and prodded and pounded on the HTML5 specification—doing our best to figure out what was meant by, and what would result from, this phrase or that example; trying to reconcile seemingly arbitrary design choices with what we knew of the web and its history and the stated goals of the HTML5 specification; puzzling over the implications of example code and detailed algorithms and non-normative notes.

In the end, we came away with a better understanding of what’s going on, and out of that arose some concerns and suggestions.  But in the main, we felt much better about what’s going on in HTML5, and have now said so publicly.

Personally, there are two markup changes I’d like most to see:

  1. The content model of footer should match that of header. As others have said, the English-language name of the footer element creates expectations about what it is and how it should work.  As the spec now stands, most of those expectations will be wrong.  To wit: if your page’s footer includes navigation links, and especially if you have an HTML5-structured “fat footer”, you can’t use footer to contain it.  (There’s a sketch of just such a footer after this list.)

    If this feels a little familiar, it should: the same problem happened with address, which was specified to mean only the contact information for the author of a page.  It was quite explicitly specified to not accept mailing addresses.  Of course, tons of people did just that, because they had an address and there was an address element, so of course they went together!

    A lot of us cringed every time this came up in the last ten years of conducting training, because it meant we’d have to spend a few minutes explaining that the meaning of the element’s name clashed with its technical design.  We saw a lot of furrowed brows, rolled eyes, and derisively shaken heads.  That will be magnified a millionfold with footer if things are allowed to stand as they are.

    As I said, the fix is simple: just change the content model of footer to state:

    Flow content, but with no header or footer element descendants.

    That’s exactly the same content model as header, and for the same reasons.

  2. time needs to be less restrictive.  That’s not very precise, I know.  But as things stand now, you can only apply time to Gregorian datetimes, and you’re not supposed to use it for anything that couldn’t be easily represented in a calendaring program.  The HTML5 specification says:

    The time element is not intended for encoding times for which a precise date or time cannot be established.

    That makes me wonder, in a manner not at all like Robert Plant, just how much precision is required.  The answer, I’m sorry to say, is: too much.

    To pick an example: I have what I think of as a great use case for the time element, and while it uses the Gregorian calendar, it’s only accurate to whole months (as is Wikipedia’s version).  In some cases I could get the values down to specific days; but in others, maybe not.  So I can’t use the datetime attribute, which requires at least year-month-day, if not actual hours and minutes.  I could omit the attribute, and just have this:

    <time>October 2007</time>
    

    In that case, the content has to be a valid date string in content—which is to say, a valid date string with optional whitespace.  So that won’t work.

    I’ve pondered how best to tackle this, as did the Super Friends.  Our suggestion is to allow bare year and month-day values as permitted in ISO 8601.  In addition, I think we should allow a valid date string to only require a year, with month, day, and time optional.  (There’s a sketch of the resulting markup after this list.)  That seems good enough as long as we’re going to go with the idea that the Gregorian calendar contains all the time we ever want to structure.

    But what about other, older dates, some of which are fairly precisely known within their own calendars?  On that point, though the historian in me clamors for a fix, I’m uncertain as to what.  PPK, on the other hand, has put a lot of thought into this and written a piece that I have skimmed but never, perhaps ironically, found the time to read in its entirety.
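To make both suggestions concrete: here is the sort of “fat footer” from point 1 that the current content model rejects, since nav is sectioning content and thus can’t live inside footer (the links are invented for illustration):

    <footer>
      <nav>
        <ul>
          <li><a href="/about/">About</a></li>
          <li><a href="/archives/">Archives</a></li>
          <li><a href="/contact/">Contact</a></li>
        </ul>
      </nav>
      <p>Copyright © 2009</p>
    </footer>

And here is what the loosened time element from point 2 would permit, with bare year and year-month values accepted in the datetime attribute:

    <time datetime="2007-10">October 2007</time>
    <time datetime="2007">sometime in 2007</time>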

These are not my only concerns, but they’re the big ones.  For the rest, I concur with the hiccups guide, though of course to varying degrees.  I’m still trying to decide how much I care (or don’t) about the subtle differences between article and section, for example, or the way aside fits (or doesn’t) with its cousin elements.  And dialog just bugs me, but I’m not sure I have a better proposal, so I’ll leave it be for the time being.

At the other end of the two days, I felt a good deal more calm and hopeful than I did going in.  As Jeffrey said, “the more I study the direction HTML5 is taking, the better I like it”.  While there are still rough edges to be smoothed, there is time to smooth them.  We’ve already seen responsiveness on some of the points we addressed in the hiccups guide, and discussions around others.  The specification itself is daunting, especially to those who might remember the compact simplicity of the HTML2 spec.  Fortunately, it has good internal cross-linking so that you can, with effort, track down exactly what’s meant by “valid date string with optional time” or “sectioning content” or “formatBlock candidate”.

With HTML5, the web is not ending, nor is it starting over.  It’s evolving, slowly and in full view of the public, with an opportunity for anyone to have their say (which is not, of course, the same as having one’s proposals accepted).  It’s the next step, and I feel quite a bit more confident that it’s a step onto solid ground.


Announcing Followerlap

Published 16 years, 2 months past

Last week, I got an interesting inquiry from Velda Christensen:

@meyerweb *wondering just how many of your followers follow @zeldman and vice-versa*

I had no idea.  Furthermore, I didn’t know of a tool that could tell me.  So I built one: Followerlap.

As it turned out, the Twitter API made answering the specific question pretty ridiculously easy, thanks to followers/ids.  All it takes is two API requests, one for each username.  The same would be true of friends/ids, on top of which I suspect I’ll fairly shortly build a tool quite similar to Followerlap.
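In case you’re curious, the core of the idea fits in a few lines.  This sketch is not Followerlap’s actual source, and it leans on modern conveniences like fetch, but the followers/ids endpoint and its response (a JSON array of numeric follower IDs) are as the API documents them:

    // Count followers two users have in common: two API requests,
    // one Set, one intersection pass.  No error handling shown.
    async function commonFollowerCount(userA, userB) {
      async function ids(name) {
        var res = await fetch(
          'http://twitter.com/followers/ids.json?screen_name=' + name
        );
        return new Set(await res.json());
      }
      var a = await ids(userA), b = await ids(userB);
      var count = 0;
      a.forEach(function (id) { if (b.has(id)) count++; });
      return count;
    }

    // commonFollowerCount('meyerweb', 'zeldman').then(alert);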

Since I announced Followerlap’s existence on (where else?) Twitter, I’ve had a few repeated (and not unexpected) bits of feedback.

  • Why not list the common followers?  Because followers/ids returns a list of numeric IDs.  Resolving those IDs as usernames would require one API hit per ID.  If there are 15 followers in common, that’s not such a big deal, but if there are 1,500, well, I’ll run out of my hourly allotment of API requests very quickly.  Maybe there’s a better way; if so, I’d love to hear about it, because that would be a great addition.

  • Why can’t I find out how many people follow both Stephen Fry and Shaquille O’Neal?  Past a certain number of followers, somewhere in the 200,000–250,000 range, the API just dies.  You can’t even count on getting a consistent error message back.  There are ways around this, but I didn’t want to stress the API that way, so it just fails.  Sorry.

  • How can I link to a specific comparison?  Originally you couldn’t, because I decided that a tool this simple should have a similarly simple launch (ship early, ship often, right?).  Now you can: use the new “Livelink to this result” link underneath a result.  (See update below for more.)

So that’s Followerlap.  Any other questions?  I’ll do my best to answer them in the comments, though for a number of reasons I may be slow to respond.

Update 6 Jul 09: as noted in the edited point above, livelinking of comparison results is now, um, live.  So now you can pass around results like the number of people who follow both God and the Devil (thanks to Paul M. Watson for coming up with that one!).  I call this “livelinking” because hitting a result URL will get you the very latest results for that particular comparison.  “Permalinking” to me implied that it would link to a specific result at a specific time, which the tool doesn’t do and very likely never, ever will.


Digging in the Mud

Published 16 years, 5 months past

There’s something about the Diggbroglio that has left me scratching my head:  how is it that so many people are up in arms about the DiggBar when they’ve had nothing to say about the framing bars of StumbleUpon, Facebook, etc. etc.?

Now, please note that I’m not saying the DiggBar, or any other framing bar, is cool and we should all love it.  I’m not.  I absolutely, completely, totally get all the reasons why framing bars are bad for breaking bookmarking and navigating and search engines and copyright and hijacking content and so on.  But that’s precisely why I’m so confused, because we’ve known for years now that framing bars are bad mojo—and yet StumbleUpon, for example, is based on bars.  There is a browser extension/plugin StumbleUpon thingy you can install, but there’s also a web-based framing bar thing (see this link, for example) that they offer, and I bet people use.  You don’t have to be a member to use it: I hit that link in a browser that allows cross-site frame loading and I get the bar and the page it’s framing, and I’ve never been a StumbleUpon member.  The source shows it’s using iframes to make it happen.  So far as I can tell, it’s not really different from the DiggBar.

So why do we have people writing JavaScript and PHP and Ged-knows-what-else that specifically busts out of the DiggBar framing, instead of busting out of all framing?  After all, site framing is universally agreed to be objectionable; even yet-to-be-discovered life forms orbiting distant stars think it’s a bad idea.  So why is one instance of it being targeted while the rest are tolerated?  Why are we all not just using if (top != self) {top.location.replace(self.location.href);} and other-language equivalents?  Yes, I know, some of you do just that, but why isn’t everyone?
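Spelled out as a standalone script, that universal buster looks like this; note replace() rather than a plain assignment to location, so the framing page doesn’t stay in the session history and trap the Back button:

    // Run as early as possible in the framed page.
    if (top != self) {
      // replace() swaps the framer out of the history entry,
      // so Back doesn’t just re-frame the page.
      top.location.replace(self.location.href);
    }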

Perhaps because I have yet to eradicate a stubborn streak of faith in the rationality of my peers, I assume that there’s some technical difference here that I’m missing and that, once understood, would let me understand the source of the outcry.  So can someone please explain to me, or point at an explanation that states, the technical ways in which the DiggBar is worse enough than already-extant framing bars that it’s triggered this outrage?  Again, nobody has to enumerate the complete list of the DiggBar’s sins; I understand.  A list of any different and more egregious sins would be just fine, though.

Also, if anyone comes up with a way to bust out of the frames while still preserving the bar—that is, correcting the problems framing bars cause while preserving their functionality for the people who want to use them—that would be extra-cool.  After all, people who use those services like the bars.  If we could let them browse the web the way they prefer while fixing bookmark/SEO/etc. problems framing bars can cause, that would be a win all the way around.

Update 14 Apr 09: looks like Porter’s trying to keep the bar without the framing.

Update 16 Apr 09: in his post about Digg changing the way the DiggBar will work, John Gruber lists (by way of quoting Digg VP John Quinn’s post about it) the two things that made the DiggBar extra-objectionable (at least in his eyes).  Thanks, John!

