Posts in the Tech Category

An Event Apart and HTML 5

Published 16 years, 1 month past

The new Gregorian year has brought a striking new Big Z design to An Event Apart, along with the detailed schedule for our first show and the opening of registration for all four shows of the year.  Jeffrey has written a bit about the thinking that went into the design already, and I expect more to come.  If you want all the juicy details, he’ll be talking about it at AEA, as a glance at the top of the Seattle schedule will tell you.  And right after that?  An hour of me talking about coding the design he created.

One of the things I’ll be talking about is the choice of markup language for the site, which ended up being HTML 5.  In the beginning, I chose HTML 5 because I wanted to do something like this:

<li>
<a href="/2009/seattle/">
<h2><img src="/i/09/city-seattle.jpg" alt="Seattle" /></h2>
<h3>May 4–5, 2009</h3>
<p>Bell Harbor International Conference Center</p>
</a>
</li>

Yes, that’s legal in HTML 5, thanks to the work done by Bruce Lawson in response to my href-anywhere agitation.  It isn’t what I’d consider ideal, structurally, but it’s close.  It sure beats having to make the content of every element its own hyperlink, each one pointing at the exact same destination:

<li>
<h2><a href="/2009/seattle/"><img src="/i/09/city-seattle.jpg" alt="Seattle" /></a></h2>
<h3><a href="/2009/seattle/">May 4–5, 2009</a></h3>
<p><a href="/2009/seattle/">Bell Harbor International Conference Center</a></p>
</li>

I mean, that’s just dumb.  Ideally, I could drop an href on the li instead of having to wrap an a element around the content, but baby steps.  Baby steps.

As Bruce discovered, pretty much all browsers will already let you wrap a elements around other stuff, so it got added to HTML 5.  And when I tried it, it worked, clickably speaking.  That is, all the elements I wrapped became part of one big hyperlink, which is what I wanted.

What I didn’t want, though, was the randomized layout weirdness that resulted once I started styling the descendants of the link.  Sometimes everything would lay out properly, and other times the bits and pieces were all over the place.  I could (randomly) flip back and forth between the two just by repeatedly hitting reload.  I thought maybe it was the heading elements that were causing problems, so I converted them all to classed paragraphs.  Nope, same problems.  So I converted them all to classed spans and that solved the problem.  The layout became steady and stable.

I was happy to get the layout problems sorted out, obviously.  Only, at that point, I wasn’t doing anything that required HTML 5.  Wrapping classed spans in links in the place of other, more semantic elements?  Yeah, that’s original.  It’s just as original as the coding pattern of “slowly leaching away the document’s semantics in order to make it, at long last and after much swearing, consistently render as intended”.  I’m sure one or two of you know what that’s like.

As a result, I could have gone back to XHTML 1.1 or even HTML 4.01 without incident.  In fact, I almost did, but in the end I decided to stick with HTML 5.  There were two main reasons.

  1. AEA is all about the current state and near future of web design and development.  HTML 5 is already here and in use, and its use will grow over time.  We try to have the site embody the conference itself as much as possible, so using HTML 5 made some sense.

  2. I wanted to try HTML 5 out for myself under field conditions, to get a sense of how similar or dissimilar it is to what’s gone before.  Turns out the answers are “very much so” to the former and “frustratingly so” to the latter, assuming you’re familiar with XHTML.  The major rules are pretty much all the same: mind your trailing slashes on empty elements, that kind of thing.  But you know what the funniest thing about HTML 5 is?  It’s the little differences.  Like not permitting a value attribute on an image submit.  That one came as a rather large surprise, and as a result our subscribe page is XHTML 1.0 Transitional instead of HTML 5.  (Figuring out how to work around this in HTML 5 is on my post-launch list of things to do; a rough idea is sketched just after this list.)

    Oh, and we’re back to being case-insensitive.  <P Class="note"> is just as valid as <p class="note">.  Having already fought the Casing Wars once, this got a fractional shrug from me, but some people will probably be all excited that they can uppercase their element names again.  I know I would’ve been, oh, six or seven years ago.

    Incidentally, I used validator.nu to check my work.  It seemed the most up to date, but there’s no guarantee it’s perfectly accurate.  Ged knows every other validator I’ve ever used has eventually been shown to be inaccurate in one or more ways.
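
For what it’s worth, the workaround I have in mind is to move the value off the image submit entirely.  A hidden input in the markup might be enough on its own; here’s a JavaScript sketch that injects one at submit time.  The form ID and field names are invented for illustration, and this is not what the subscribe page actually does.

// A sketch only: drop the disallowed value="" from the image submit and
// have script supply the value via a hidden input when the form is sent.
// "subscribe-form", "action", and "subscribe" are all hypothetical names.
document.getElementById("subscribe-form").onsubmit = function () {
  var hidden = document.createElement("input");
  hidden.type = "hidden";
  hidden.name = "action";
  hidden.value = "subscribe"; // the value the image submit used to carry
  this.appendChild(hidden);
};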

I get the distinct impression that use of HTML 5 is going to cause equal parts of comfort (for the familiar parts) and eye-watering rage (for the apparently idiotic differences).  Thus it would seem the HTML 5 Working Group is succeeding quite nicely at capturing the current state of browser behavior.  Yay, I guess?

And then there was the part where I got really grumpy about not being able to nest a hyperlink element inside another hyperlink element… but that, like so many things referenced in this post, is a story for another day.


MW Latest Tweet 1.1b2

Published 16 years, 2 months past

Now available: MW Latest Tweet 1.1b2.  The only real difference between this version and the previous is better auto-link routines, thanks largely to a PHP4-ified version of Joseph Scott’s recently released MakeItLink PHP class.  I tightened up some related code as well, thanks to my newfound understanding of just what the heck a “callback function” actually does, and how it can be useful.  And anonymous functions, too!
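
If “callback function” is a new phrase to you, here’s the idea in miniature, sketched in JavaScript even though the plugin is PHP: a function you hand to another function, which calls it back at the right moment.  Auto-linking is the classic case.

// A callback in miniature: replace() accepts a function and calls it back
// once per regex match, letting each match decide its own replacement.
// The callback here is also anonymous: defined inline, never given a name.
var linked = "read http://example.com/ sometime".replace(
  /\bhttps?:\/\/\S+/g,
  function (url) {
    return '<a href="' + url + '">' + url + '</a>';
  }
);
// linked: read <a href="http://example.com/">http://example.com/</a> sometime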

Also, there is an “enter debug mode” link at the bottom of the administrative panel for the plugin.  It’s very cleverly matched with an “exit debug mode” link when you’re in debug mode.  These links do just what they sound like they should do.  Debug mode itself, introduced in the previous beta, is unchanged (except maybe cosmetically).

In case anyone’s interested in seeing how I use the text-replacement strings on meyerweb, here’s what I have in that textarea, formatted slightly for readability:


<div class="panel">
<h4>Recently Tweeted</h4>
<p class="more">
<a href="http://twitter.com/%%USER_SCREEN_NAME%%">see more</a>
</p>
<p>
%%TEXT%% <small>–tweeted %%CREATED_AT%%</small>
</p>
</div>

There were some reports of incompatibility between this plugin and early WordPress 2.7 betas.  Word is it’s working fine with the latest beta.  I probably won’t fix any incompatibilities until 2.7 final ships, but if anyone spots something they absolutely know will be a problem in 2.7 final, let me know.  Thanks!


Fixing Postcodes

Published 16 years, 3 months past

In case anyone’s interested, I finally updated the ZIP archive of all the countries and postcodes from the 2008 ALA survey.  The two files are sorted like before, but this time leading-zero postcodes haven’t had their leading zeroes stripped by Excel.  Oh, Excel.

I have learned way more about Excel’s “helpful” handling of CSV and text imports than I ever wanted to know.  The basic drill is, if you want to open a CSV or text file but don’t want Excel to be “helpful”, don’t drop the file onto Excel or double-click the file icon.  No no!  That would be too easy.

Instead, launch Excel, select “File > Open”, and then select the CSV or text file you want to open in the file browser.  Go through the Text Import Wizard carefully:

  1. Tell Excel that the file is delimited on the first screen.  (Or, if it isn’t, then don’t.  I bet it is, though.)
  2. Tell Excel what delimiter you’re using on the second screen.
  3. Then—this is the crucial bit—on the third Wizard screen, select the columns you don’t want Excel to “help” you with and set them to “Text”.  Be careful about setting all the columns as “Text”, though: if you have non-ASCII characters, Excel will “helpfully” replace their contents with octothorpes when you try to export the data later.  Such “help”!  It’s so “helpful”!  (There’s a tiny illustration of what’s at stake just below.)

Yay!  An open file where the data is all in its original state!
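
As for what’s at stake in step 3, here’s the leading-zero problem in miniature, sketched in JavaScript since the coercion is the same idea in any language:

// The moment a leading-zero postcode is treated as a number, the zero is
// gone for good.  Excel's default "General" column type does the moral
// equivalent of the Number() line; a "Text" column leaves the string alone.
var postcode = "02134";         // a leading-zero ZIP code, kept as text
console.log(Number(postcode));  // 2134    (zero destroyed)
console.log(postcode);          // "02134" (safe so long as it stays a string)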

Now you can save the file as an Excel workbook and it should (but please note my use of the word should) leave your data alone.  Ditto if you do “Save As…” to export to CSV or text again, which you might do if you run some calculations and want to capture the result in a basic, portable format.  But remember!  If you ever want to open those CSV/text files in Excel, you can’t just open them.  You have to go through the whole text-import process again.

So the survey files now contain actual useful data, especially for countries where postcodes can start with zeroes.  (Which is a lot of them.)  The files also have the usual bits of abuse that come along with daring to ask people to supply optional information, because I didn’t even try to filter that stuff out.  So, you know, naughty words ahead.

In part, I’m posting this to leave a record for anyone else who runs into the same problems I had, and also to remind myself of what has to be done next year.  Also to provide a heads-up to anyone who’d like to grab the fixed-up data and do fun mapping stuff with it, as did some commenters on the previous post.


JavaScript Will Save Us All

Published 16 years, 3 months past

A while back, I woke up one morning thinking, John Resig’s got some great CSS3 support in jQuery but it’s all forced into JS statements.  I should ask him if he could set things up like Dean Edwards’ IE7 script so that the JS scans the author’s CSS, finds the advanced selectors, does any necessary backend juggling, and makes CSS3 selector support Transparently Just Work.  And then he could put that back into jQuery.

And then, after breakfast, I fired up my feed reader and saw Simon Willison’s link to John Resig’s nascent Sizzle project.

I swear to Ged this is how it happened.

Personally, I can’t wait for Sizzle to be finished, because I’m absolutely going to use it and recommend its use far and wide.  As far as I’m concerned, though, it’s a first step into a larger world.

Think about it: most of the browser development work these days seems to be going into JavaScript performance.  Those engines are being overhauled and souped up and tuned and re-tuned to the point that performance is improving by orders of magnitude.  Scanning the DOM tree and doing things to it, which used to be slow and difficult, is becoming lightning-fast and easy.

So why not write JS to implement multiple background-image support in all browsers?  All that’s needed is to scan the CSS, find instances of multiple-image backgrounds, and then dynamically add divs, one per extra background image, to get the intended effect.
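
To make that concrete, here’s a rough sketch of the approach.  It’s my own illustration, not Sizzle or any actual library, and it assumes same-origin stylesheets.  One real-world wrinkle: browsers discard declarations they don’t understand, so a production version would have to fetch and parse the raw CSS text instead of trusting the CSSOM the way this sketch does.

// A proof of concept only: naive parsing, no handling of background-repeat
// or background-position per layer, and so on.
function expandMultipleBackgrounds() {
  for (var s = 0; s < document.styleSheets.length; s++) {
    var rules;
    try { rules = document.styleSheets[s].cssRules; } // throws cross-origin
    catch (e) { continue; }
    if (!rules) continue;
    for (var r = 0; r < rules.length; r++) {
      var rule = rules[r];
      if (!rule.style || !rule.selectorText) continue;
      var value = rule.style.getPropertyValue("background-image");
      var images = value.split(/,(?![^(]*\))/); // split between url(...) values
      if (images.length < 2) continue;
      var found = document.querySelectorAll(rule.selectorText);
      for (var t = 0; t < found.length; t++) {
        var el = found[t];
        // Force a stacking context so negative z-index layers sit above
        // the element's own background but beneath its content.
        el.style.position = "relative";
        el.style.zIndex = "0";
        // CSS paints the first listed image on top, so the last image goes
        // on the element itself and the rest become layers stacked above it.
        el.style.backgroundImage = images[images.length - 1];
        for (var i = 0; i < images.length - 1; i++) {
          var layer = document.createElement("div");
          layer.style.cssText = "position:absolute;top:0;right:0;bottom:0;" +
            "left:0;z-index:" + (-(i + 1)) +
            ";background-image:" + images[i] + ";";
          el.appendChild(layer);
        }
      }
    }
  }
}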

Just like that, you’ve used the browser’s JS to extend its CSS support.  This approach advances standards support in browsers from the ground up, instead of waiting for the browser teams to do it for us.

I suspect that not quite everything in CSS3 will be amenable to this approach, but you might be surprised.  Seems to me that you could do background sizing with some div-and-positioning tricks, and text-shadow could be supportable using a sIFR-like technique, though line breaks would be a bear to handle.  RGBa and HSLa colors could be simulated with creative element reworking and opacity, and HSL itself could be (mostly?) supported in IE with HSL-to-RGB calculations.  And so on.
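
The HSL-to-RGB math, for instance, fits in a dozen lines of script.  Here’s one standard formulation of the conversion, as a quick sketch:

// One standard formulation: h in degrees, s and l as 0-1 fractions;
// returns [r, g, b] with each channel on the 0-255 scale.
function hslToRgb(h, s, l) {
  var c = (1 - Math.abs(2 * l - 1)) * s;        // chroma
  var hp = (((h % 360) + 360) % 360) / 60;      // hue sector, 0-6
  var x = c * (1 - Math.abs((hp % 2) - 1));     // intermediate component
  var rgb = hp < 1 ? [c, x, 0] : hp < 2 ? [x, c, 0] :
            hp < 3 ? [0, c, x] : hp < 4 ? [0, x, c] :
            hp < 5 ? [x, 0, c] : [c, 0, x];
  var m = l - c / 2;                            // lightness adjustment
  return [Math.round((rgb[0] + m) * 255),
          Math.round((rgb[1] + m) * 255),
          Math.round((rgb[2] + m) * 255)];
}
// hslToRgb(120, 0.5, 0.5) gives [64, 191, 64], a medium green.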

There are two primary benefits here.  The first is obvious: we can stop waiting around for browser makers to give us what we want, thanks to their efforts on JS engines, and start using the advanced CSS we’ve been hearing about for years.  The second is that the process of finding out which parts of the spec work in the real world, and which fall down, will be greatly accelerated.  If it turns out nobody uses (say) background-clip, even given its availability via a CSS/JS library, then that’s worth knowing.

What I wonder is whether the W3C could be convinced that two JavaScript libraries supporting a given CSS module would constitute “interoperable implementations”, and thus allow the specification to move forward on the process track.  Or heck, what about considering a single library getting consistent support in two or more browsers as interoperable?  There’s a chance here to jump-start the entire process, front to back.

It is true that browsers without JavaScript will not get the advanced CSS effects, but older browsers don’t get our current CSS, and we use it anyway.  (Still older browsers don’t understand any CSS at all.)  It’s the same problem we’ve always faced, and everyone will face it differently.

We don’t have to restrict this to CSS, either.  As I showed with my href-anywhere demo, it’s possible to extend markup using JS.  (No, not without breaking validation: you’d need a custom DTD for that.  Hmmm.)  So it would be possible to use JS to, say, add audio and video support to currently-available browsers, and even older browsers.  All you’d have to do is convert the HTML 5 element into HTML 4 elements, dynamically writing out the needed attributes and so forth.  It might not be a perfect 1:1 translation, but it would likely be serviceable—and would tear down some of the highest barriers to adoption.
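
As a gesture toward what that might look like, here’s a deliberately minimal sketch that downgrades every HTML 5 video element to a plain HTML 4 hyperlink.  A real version would write out object and embed markup for an actual player, which is exactly the “needed attributes and so forth” part.

// Deliberately minimal: swap each <video> for an HTML 4-safe hyperlink.
// Assumes a src attribute rather than source children; truly old IE would
// also need document.createElement("video") called once beforehand so the
// unknown element parses at all.
var copies = Array.prototype.slice.call(document.getElementsByTagName("video"));
for (var i = 0; i < copies.length; i++) { // static copy: the live list
  var video = copies[i];                  // shrinks as elements are swapped
  var link = document.createElement("a");
  link.href = video.getAttribute("src") || "#";
  link.appendChild(document.createTextNode("Play this video"));
  video.parentNode.replaceChild(link, video);
}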

There’s more to consider, as well: the ability to create our very own “standards”.  Maybe you’ve always wanted a text-shake property, which jiggles the letters up and down randomly to look like the element got shaken up a bit.  Call it -myCSS-text-shake or something else with a proper “vendor” prefix—we’re all vendors now, baby!—and go to town.  Who knows?  If a property or markup element or attribute suddenly takes off like wildfire, it might well make it into a specification.  After all, the HTML 5 Working Group is now explicitly set up to prefer things which are implemented over things that are not.  Perhaps the CSS Working Group would move in a similar direction, given a world where we were all experimenting with our own ideas and seeing the best ideas gain widespread adoption.
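
Just to show how little it would take, here’s a toy version of that hypothetical text-shake, minus the stylesheet-scanning plumbing that would read a -myCSS-text-shake property:

// Wrap each letter of an element in a span nudged up or down by a random
// number of pixels.  (Flattens any nested markup; fine for a demo.)
function textShake(el, maxPx) {
  var text = el.textContent;
  el.textContent = "";
  for (var i = 0; i < text.length; i++) {
    var span = document.createElement("span");
    span.textContent = text.charAt(i);
    span.style.position = "relative";
    span.style.top = ((Math.random() * 2 - 1) * maxPx).toFixed(1) + "px";
    el.appendChild(span);
  }
}
// textShake(document.querySelector("h1"), 3);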

In the end, as I said in Chicago last week, the triumph of standards (specifically, the DOM standard) will permit us to push standards support forward now, and save some standards that are currently dying on the vine.  All we have to do now is start pushing.  Sizzle is a start.  Who will take the next step, and the step after that?


Survey Mapping

Published 16 years, 4 months past

An anonymized copy of the data collected in the 2008 Survey has been turned over to some professional statisticians, as we did last year, and we’re waiting to hear back from them before moving into writing the full report.  But there’s no reason we can’t have a little fun while we wait, right?

So, calling all mapping ninjas: here’s a 136KB zip archive containing two tab-separated text files listing the countries and postcodes supplied by takers of the survey.  Before anyone has a privacy-related aneurysm, though, let me explain how they’re structured.

One of the two files is sorted alphabetically by country, with the postcodes as the second “column of data” (it’s country name, tab, postcode).  The second is the reverse: it’s sorted alphabetically by postcode, with the country names following each postcode.  This sorting should break any association they might have with the released data set, given that we won’t be including the postcodes in the released set.  (More on that in a moment.)

A word of warning: though I cleaned out some of the more obvious cases of people heaping abuse on us for even daring to ask the question, I can’t guarantee that the data set is perfectly clean.  There may be drops of bile here and there along with the usual collection of mistyped postcodes.  I know there’s at least one bit of obvious humor that I chose to leave in, so enjoy that when you find it.

We have two reasons to release this data this way at this point.  The first is to see what people do with it—heatmaps, perhaps, or one of those proportion-distortion maps, or a list of top-ten global postcodes or cities (or both).  Hey, go crazy!  I’d love to see a number of Google Maps/Yahoo! Maps/OpenMap/whatever mashups with this data.  That would be awesome.

The second reason is to ask for help with an API challenge.  Like I said, we’re not including the postcodes in the released data set.  What I would like to do instead is translate the postcodes into administrative regions (states, provinces, etc.) and put those in the data set.  That way, we can include things like “Ohio” and “British Columbia” and “Oaxaca”—thus providing a little bit better granularity in terms of geography, which was an area of weakness in the 2007 survey.

Thanks to reading a couple of articles, I know how to do this for a single postcode.  But how does one do it for 26,457 postcode-and-country combinations without having to submit every single postcode as a separate request?  I’ve yet to see an explanation, and maybe there isn’t one, but I’d like to know either way.  And please, if someone does come up with a way, please show the work instead of just spitting out the result!  I’m hoping to learn a few things from the solution, but I obviously can’t do that without seeing the code.

One note: in cases where a postcode isn’t recognized or some kind of an error is returned, I’d like to have a little dash or “ERR” or something put in the result file.  That way we can get a handle on what percentage of the responses were resolvable.  Thanks.
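
To be clear about the shape of the thing, here’s the brute-force baseline I’m hoping someone can improve on: a Node-flavored JavaScript loop with the error markers just described.  The geocoder URL and the response’s region field are hypothetical stand-ins, and any real service’s rate limits and terms would apply.

// One request per postcode (the slow way), writing "ERR" on any failure
// so the unresolvable percentage can be measured afterward.
var fs = require("fs");

// Hypothetical endpoint; substitute a real geocoding service's URL format.
function geocodeUrl(country, postcode) {
  return "https://geocoder.example/lookup?country=" +
    encodeURIComponent(country) + "&postcode=" + encodeURIComponent(postcode);
}

async function regionFor(country, postcode) {
  try {
    var res = await fetch(geocodeUrl(country, postcode));
    if (!res.ok) return "ERR";
    var data = await res.json();
    return data.region || "ERR"; // assumed response shape
  } catch (e) {
    return "ERR";
  }
}

async function resolveAll(pairs) {
  var lines = [];
  for (var i = 0; i < pairs.length; i++) {
    var region = await regionFor(pairs[i][0], pairs[i][1]);
    lines.push(pairs[i][0] + "\t" + pairs[i][1] + "\t" + region);
  }
  fs.writeFileSync("regions.txt", lines.join("\n"));
}

// resolveAll([["United States", "44118"], ["Canada", "V6B 1A1"]]);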

Anyway, map and enjoy!


Eventful

Published 16 years, 5 months past

I hope I’m not too late to say so, but the early bird registration deadline for An Event Apart Chicago is this coming Monday.  Last chance to save $100 on the last show of 2008!

Between now and the Chicago event, I’ll be back in lovely Destin, Florida for this year’s edition of the CIW conference at which I spoke last year.  This time around I’ll be doing a blend of beginner and advanced CSS, plus a more reflective talk on the state of the web as I see it both now and in the near future.

In a like vein, I’ll be taking much the same topics and messages to the stage of Web Directions East in Tokyo, Japan.  Thanks to both personal and professional obligations, overseas travel is a rarity for me these days, and furthermore this will be only my second appearance in Asia (the first having been WWW2005), so this is a rare opportunity to catch me away from the Americas.  I think everyone should go.  C’mon, we’ve had people come all the way to the U.S. from places as far away as Bulgaria, New Zealand, Japan, and Singapore (twice!) to attend AEA, so what’s your excuse?


MW Latest Tweet 1.1b1

Published 16 years, 5 months past

There’s a new beta of MW Latest Tweet available.  It does four new things.  Four and a half if you count the new options setting as a half.

  1. All the files are in the mw_latest_tweet directory now, instead of having the plugin PHP outside of that directory like 1.0 did.  Yeah, I know, that should’ve been the case all along.  Sorry!  Learning on the job here.

    If you’re upgrading from 1.0, you should probably delete the 1.0 file and directory outright before uploading the 1.1b1 directory.  Alternatively, you should be able to upload 1.1b1, deactivate 1.0, activate 1.1b1, and then delete just the 1.0 PHP file.  I haven’t tried that, so I don’t know if it will actually work, but it seems like it should.

  2. URLs within a tweet are turned into hyperlinks for easy clickin’.  To go with this new feature, there’s a new option on the settings page to either shorten displayed URLs, like twitter.com does, or to not shorten them.  The default is to shorten, which means any URL 29 or more characters long gets shortened to 27 characters and gains a trailing ellipsis.  Again, like Twitter does it—although I used an ellipsis entity and not three periods.  (Both auto-link behaviors are sketched just after this list.)

    Note that if you upgrade from 1.0 to 1.1b1, this setting may default to “No” instead of “Yes”.  I’m not sure why, but it’s a pretty low-priority item right now.

  3. On a related note, @names are autolinked as well.  I’m using the pattern [A-Za-z0-9_] since that’s what Twitter says are valid characters for a username, even though if you type in a grawlix on the signup page it will tell you, in nice bold green letters, that it’s available.

  4. If you want to see everything the plugin has cached, append &debug to the end of the plugin’s settings page URL and hit return.  You’ll get the settings page with a dump of the cached data at the end.  This is clumsy and will be much less so before 1.1 final.  I’m thinking click a link, enter debug mode.  Probably won’t go all AJAXy, though you never know.
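
For anyone curious about the mechanics, here’s the gist of the two auto-link behaviors in a quick JavaScript sketch.  The plugin itself is PHP4; this is an illustration of the rules, not its actual code.

// URL display shortening as described above: 29 or more characters becomes
// the first 27 plus an ellipsis entity (not three periods).  Presumably the
// full URL stays in the href and only the visible text is trimmed.
function shortenForDisplay(url) {
  return url.length >= 29 ? url.slice(0, 27) + "&#8230;" : url;
}

// @name auto-linking, using Twitter's stated valid-character pattern.
function linkMentions(html) {
  return html.replace(/@([A-Za-z0-9_]+)/g,
    '<a href="http://twitter.com/$1">@$1</a>');
}
// linkMentions("hat tip to @meyerweb") gives:
// hat tip to <a href="http://twitter.com/meyerweb">@meyerweb</a>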

So that’s the state of things.  Let me know if anything breaks.


Subverting WordPress

Published 16 years, 5 months past

I’m going to get back to posting here in just a bit with word of conference appearances (some overseas), plugin updates, and a small elegy, but first I need a little help with WordPress and subversion, if someone could spare the cycles to assist a newb.  (Which would be me.)

So I have meyerweb’s WordPress install all subversion-managed.  The only problem is that there are three core files I’ve had to hack (reasons upon request) and that makes updating really icky.  I fire off…

svn sw http://svn.automattic.com/wordpress/tags/2.6.1/ .

…(where 2.6.1 is replaced with whatever the latest version is) and it updates everything.  For my custom-altered files, though, I get diff files and a .mine file that has my old copy, plus a working copy that’s littered with conflict markers, which cause PHP error-crashes that take down the site.  At least until I go in and copy the .mine files over the conflicted files.

So: how do I do some kind of local checkin of the altered files so that I don’t attempt to post them back to the WordPress codebase (these are very specialized hacks) but future WordPress updates don’t break my site?  For extra ideal points, it would be great if those files were updated with my changes merged into the files.  If it helps, the files thus affected are /wp-blog-header.php, /wp-includes/classes.php, and /wp-admin/edit-form-advanced.php.  Thanks for any help!

