Posts in the Web Category

Sixth Annual Blue Beanie Day

Published 11 years, 11 months past

I just recently stumbled across a years-ago post where I said, almost as an aside:

Web design isn’t like chemistry, where the precipitate either forms or it doesn’t. If chemical engineers had to work in conditions equivalent to web developers, they’d have to mix their solutions in several parallel universes, each one with different physical constants, and get the same result in all of them.

While that’s still true, the constants are a lot less divergent these days.  The parallel universes that are web browsers are much closer to unity than once they were.

Remember those days?  When major web sites had a home page with two links: one for Netscape users to enter, the other for IE users?

Madness.

We know better now, of course.  Thanks to early pioneers like the organizers of the Web Standards Project, the path of web development was bent to a much saner course.  We still have little glitches and frustrations, of course, but it could be so unimaginably worse.  We know that it could be, because it was, once.

Along the way, the cover of my friend and business partner’s book, Designing With Web Standards, gave rise to Blue Beanie Day, the day on which we give visible presence to our solidarity with the idea that web standards make possible the web as we know it.  Pictures go up on Twitter, Instagram, and Flickr with the tag #bbd12, and can be added to the Flickr group if you post there.

In this rapidly unfolding age of multiple device platforms and web access experiences, standards are more important than ever, even as they come under renewed pressure.  There will always be those who proclaim that standards are a failed process, an obstruction, an anachronism.  The desire to go faster and be shinier will always tempt developers to run down proprietary box canyons.

But so too will there always be those of us who remember the madness that lies that way.  Come November 30th, thousands of us will don our blue beanies.  I hope you’ll be among us.

Image © Kevin Cornell.  Used with permission.


The Web Behind #1

Published 12 years, 1 month past

Last Thursday was the first episode of The Web Behind, which was also episode #35 of The Web Ahead, and I couldn’t really have been much happier with it.  John Allsopp made it brilliant by being brilliant, as always.  To spend 80 minutes talking with someone with so much experience and insight will always be an act of pure joy, and we were beyond thrilled that he used the occasion to announce his Web History Timeline Project — a web-based timeline which anyone can enrich by easily adding milestones.

The episode is up on 5by5, where there are a whole bunch of links to things that came up in the conversation, as well as on iTunes — so pick your favorite channel and listen away!  If you are an iTunes listener, Jen and I would be deeply grateful if you could give the show a quick review and rating, but please don’t feel that you’re somehow obligated to do so in order to listen!  We’ll be more than happy if people simply find all this as interesting as we do, and happier still if you find the shows interesting enough to subscribe via RSS or iTunes.

Guests are lining up for the next few shows, which will come about once every other week.  Jen is preparing a standalone web site where we’ll be able to talk about new and upcoming episodes, have a show archive, provide show information and wiki pages, and much more.  Great stories and perspectives are being uncovered.  Exciting times!


John Allsopp to Inaugurate ‘The Web Behind’

Published 12 years, 2 months past

Jen Simmons and I are very pleased to announce that our first guest on The Web Behind will be none other than John Allsopp.

Hailing from Sydney, Australia, John by himself has seen and done more on the web than most web teams put together.  First encountering the web in the early 1990s, he built one of the very first CSS tools, Style Master, and a number of other web development tools; published a wealth of information like support charts and free courses; wrote the deeply insightful and far-seeing article “A Dao of Web Design”; influenced the course of the Web Standards Project; and founded a successful international conference series that continues to this day.

We’re incredibly excited to have John as our inaugural guest, and hope you’ll join us for the live recording this Thursday, September 20th at 6pm Eastern/3pm Pacific.  That’s also Friday, September 21st at 8am Sydney time, and 2200 UTC if you want to calculate your own local offsets.  The time zone dance is the reason we’re recording the first show at that particular time.  Moving forward, the plan is to record on Wednesdays, usually mid-afternoon (US Eastern) but sometimes in the morning — again, depending on the time zones of our guests.

Be able to say you were there when it all started:  please join us for the live recording, and subscribe to get the finished podcasts as they’re released.  We already have some great guests lined up for subsequent shows — more on that as we firm up dates and times — and some interesting plans for the future.  We really hope you’ll be there with us!


The Web Behind

Published 12 years, 2 months past

Whenever I meet a new person and we get to talking about our personal lives, one of the things that seems to surprise people the most, besides the fact that I live in Cleveland and not in New York City or San Francisco, is that I have a Bachelor of Arts in History.  The closest I came to Computer Science was a minor concentration in Artificial Intelligence, and in all honesty it was more of a philosophical study.

To me, history is vital.  As a species, we’ve made a plethora of mistakes and done myriad things right, and the record (and outcomes) of those successes and failures can tell us a great deal about how we got to where we are as well as where we might go.  (Also, from a narrative standpoint, history is the greatest and most authentic story we’ve ever told — even the parts that are untrue.)  The combination of that interest and my ongoing passion for the web is what led me to join the W3C’s recently formed Web History Community Group, where efforts to preserve (digital) historical artifacts are slowly coalescing.

But even more importantly, it’s what has led me to establish a new web history podcast in association with Jen Simmons of The Web Ahead.  The goal of this podcast, which is a subset of The Web Ahead, is to interview people who made the web today possible.  The guests will be authors, programmers, designers, vendors, toolmakers, hobbyists, academics: some whose names you’ll instantly recognize, and others who you’ve never heard of even though they helped shape everything we do.  We want to bring you their stories, get their insights and perspectives, and find out what they’ve been doing of late.  The Mac community has folklore.org; I hope that this podcast will help start to build a similar archive for the web.  You can hear us talk about it a bit on The Web Ahead #34, where we announce our first guest as well as the date and time for our first show!  (Semi-spoiler: it’s next week.)

Jen and I have taken to calling this project The Web Behind in our emails, and the name stuck.  It really is a subset of The Web Ahead, so if you’re already subscribed to The Web Ahead, then episodes of The Web Behind will come to you automatically!  If not, and you’re interested, then please subscribe!  We already have some great guests lined up, and will announce the first few very soon.

I haven’t been this excited about a new project in quite some time, so I very much hope you’ll join Jen and me (and be patient as I relearn my radio chops) for a look back that will help to illuminate both our present and our future.


Results From The Survey, 2011

Published 12 years, 2 months past

On Tuesday — and I fully acknowledge that the fact it’s taken me until now to blog this is emblematic — A List Apart published the results of the fifth annual A List Apart Survey for People Who Make Websites.  This includes anonymized data sets for the bulk of the survey, as well as standalone data sets for postcodes and a few of the answer sets for questions that allowed “Other” as an option.  (Note that these last were shuffled-then-sorted, and were not filtered for potentially objectionable content.  They are what they are.)

If you really want the TL;DR version, the results are largely the same as they’ve been in the past.  The gender ratio, for example, is still in the vicinity of 5-to-1 male-to-female, with half a percent answering Other (a new option in the 2011 survey).  Most respondents are in the age range 19-44 and live in the United States.  And so on.  That might sound like I’m bored by the results, but their very consistency even as the number of respondents has dropped over five years fascinates me.

It did take quite a while to publish the results.  I feel personally very bad about the delay, because I’m the one who runs the numbers, and it just took me a long time to get them run.  Partly, I admit, I put it off because some of the numbers in previous years were a royal pain to generate, thanks in part to the way the data is formatted and in part because of the fine slicing that was done.  This was finally addressed through various means, and now the report is done.  I can’t thank Sara Wachter-Boettcher enough for her keen editing eye and firm strategic oversight, not to mention writing all the commentary text to accompany the charts.  If not for her, the report might still not be done.  And of course without the unwavering support and dedication of Jeffrey Zeldman, the survey might not have existed at all.

So we’ve done this five times, and the results are consistent.  What now?  There is much to discuss, and the answers aren’t yet clear; but I do know that this project brings me more professional pride than almost anything I’ve ever done.  It tells us a lot about ourselves — and in a profession that is often characterized by single-person “web teams” and distributed offices, one which may never have a certification process or other form of registry, that’s something valuable.  Thank you for helping us see ourselves a little bit more clearly.


Firefox Failing localStorage Due to Cookie Policy

Published 12 years, 6 months past

I recently stumbled over a subtle interaction between cookie policies and localStorage in Firefox.  Herewith, I document it for anyone who might run into the same problem (all four of you) as well as for you JS developers who are using, or thinking about using, locally stored data.  Also, there’s a Bugzilla report, so either it’ll get fixed and then this won’t be a problem or else it will get resolved WONTFIX and I’ll have to figure out what to do next.

The basic problem is, every newfangled “try code out for yourself” site I hit is just failing in Firefox 11 and 12.  Dabblet, for example, just returns a big blank page with the toolbar across the top, and none of the top-right buttons work except for the Help (“?”) button.  And I write all that in the present tense because the problem still exists as I write this.

What’s happening is that any attempt to access localStorage, whether writing or reading, returns a security error.  Here’s an anonymized example from Firefox’s error console:

Error: uncaught exception: [Exception... "Security error"  code: "1000" nsresult: "0x805303e8 (NS_ERROR_DOM_SECURITY_ERR)"  location: "http://example.com/code.js Line: 666"]

When you go to line 666, you discover it refers to localStorage.  Usually it’s a write attempt, but reading gets you the same error.
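For illustration, here’s a minimal sketch of the sort of access that triggers it — nothing exotic, just an ordinary write and read (the key and value here are hypothetical, not taken from any particular site’s code):

    // Hypothetical example: with "Keep until: ask me every time" set,
    // both of these lines throw NS_ERROR_DOM_SECURITY_ERR in Firefox 11/12
    // instead of failing quietly.
    localStorage.setItem('draftContent', 'body { color: red; }');  // write attempt throws
    var draft = localStorage.getItem('draftContent');              // a plain read throws too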

But here’s the thing: it only does this if your browser preferences are set so that, when it comes to accepting cookies, the “Keep until:” option is set to “ask me every time”.  If you change that to either of the other two options, then localStorage can be written and read without incident.  No security errors.  Switch it back to “ask me every time”, and the security errors come back.

Just to cover all the bases regarding my configuration:

  1. Firefox is not in Private Browsing mode.
  2. dom.storage.default_quota is 5120.
  3. dom.storage.enabled is true.

Also:  yes, I have my cookie policy set that way on purpose.  It might not work for you, but it definitely works for me.  “Just change your cookie policy” is the new “use a different browser” (which is the new “get a better OS”) and it ain’t gonna fly here.

To my way of thinking, this behavior doesn’t conform to step one of 4.3 The localStorage attribute, which states:

The user agent may throw a SecurityError exception instead of returning a Storage object if the request violates a policy decision (e.g. if the user agent is configured to not allow the page to persist data).

I haven’t configured anything to not persist data — quite the opposite — and my policy decision is not to refuse cookies, it’s to ask me about expiration times so I can decide how I want a given cookie handled.  It seems to me that, given my current preferences, Firefox ought to ask me if I want to accept local storage of data whenever a script tries to write to localStorage.  If that’s somehow impossible, then there should at least be a global preference for how I want to handle localStorage actions.

Of course, that’s all true only if localStorage data has expiration times.  If it doesn’t, then I’ve already said I’ll accept cookies, even from third-party sites.  I just want a say on their expiration times (or, if I choose, to deny the cookie through the dialog box; it’s an option).  I’m not entirely clear on this, so if someone can point to hard information on whether localStorage does or doesn’t time out, that would be fantastic.  I did see:

User agents should expire data from the local storage areas only for security reasons or when requested to do so by the user.

…from the same section, which to me sounds like localStorage doesn’t have expiration times, but maybe there’s another bit I haven’t seen that casts a new light on things.  As always, tender application of the Clue-by-Four of Enlightenment is welcome.

Okay, so the point of all this: if you’re getting localStorage failures in Firefox, check your cookie expiration policy.  If that’s the problem, then at least you know how to fix it — or, as in my case, why you’ll continue to have localStorage problems for the next little while.  Furthermore, if you’re writing JS that interacts with localStorage or a similar local-data technology, please make sure you’re looking for security exceptions and other errors, and planning appropriate fallbacks.
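To make that concrete, here’s a rough sketch of the kind of defensive wrapper I have in mind — the helper names are made up, and a plain object stands in for localStorage whenever an access throws:

    // Hypothetical sketch: fall back to an in-page object when localStorage
    // throws (as in the Firefox case above) or is otherwise unavailable.
    var memoryStore = {};

    function safeSet(key, value) {
      try {
        localStorage.setItem(key, value);
      } catch (e) {
        // Security errors, quota errors, etc. -- keep the data for this page view only.
        memoryStore[key] = value;
      }
    }

    function safeGet(key) {
      try {
        var value = localStorage.getItem(key);
        if (value !== null) {
          return value;
        }
      } catch (e) {
        // Fall through to the in-memory copy.
      }
      return (key in memoryStore) ? memoryStore[key] : null;
    }

The data obviously won’t survive the page being unloaded, but at least the script keeps running instead of dying on an uncaught exception.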


Invented Elements

Published 12 years, 8 months past

This morning I caught a pointer to TypeButter, which is a jQuery library that does “optical kerning” in an attempt to improve the appearance of type.  I’m not going to get into its design utility because I’m not qualified; I only notice kerning either when it’s set insanely wide or when it crosses over into keming.  I suppose I’ve been looking at web type for so many years, it looks normal to me now.  (Well, almost normal, but I’m not going to get into my personal typographic idiosyncrasies now.)

My reason to bring this up is that I’m very interested by how TypeButter accomplishes its kerning: it inserts kern elements with inline style attributes that bear letter-spacing values.  Not span elements, kern elements.  No, you didn’t miss an HTML5 news bite; there is no kern element, nor am I aware of a plan for one.  TypeButter basically invents a specific-purpose element.
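To give a sense of the general shape — this is not TypeButter’s actual code, and the spacing values are invented — the technique amounts to something like this:

    // Rough sketch of the invented-element approach (hypothetical code):
    // wrap each character of a heading in a <kern> element carrying an
    // inline letter-spacing value, producing markup along the lines of
    //   <h1><kern style="letter-spacing: -0.01em">T</kern><kern ...>y</kern>pe...</h1>
    function kernText(element, spacings) {
      var text = element.textContent;
      element.innerHTML = '';
      for (var i = 0; i < text.length; i++) {
        var k = document.createElement('kern');            // an unknown element; browsers style it anyway
        k.style.letterSpacing = (spacings[i] || 0) + 'em';  // made-up per-character values
        k.appendChild(document.createTextNode(text.charAt(i)));
        element.appendChild(k);
      }
    }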

I believe I understand the reasoning.  Had they used span, they would’ve likely tripped over existing author styles that apply to span.  Browsers these days don’t really have a problem accepting and styling arbitrary elements, and any that do would simply render type their usual way.  Because the markup is script-generated, markup validation services don’t throw conniption fits.  There might well be browser performance problems, particularly if you optically kern all the things, but used in moderation (say, on headings) I wouldn’t expect too much of a hit.

The one potential drawback I can see, as articulated by Jake Archibald, is the possibility of a future kern element that might have different effects, or at least be styled by future author CSS and thus get picked up by TypeButter’s kerns.  The currently accepted way to avoid that sort of problem is to prefix with x-, as in x-kern.  Personally, I find it deeply unlikely that there will ever be an official kern element; it’s too presentationally focused.  But, of course, one never knows.

If TypeButter shifted to generating x-kern before reaching v1.0 final, I doubt it would degrade the TypeButter experience at all, and it would indeed be more future-proof.  It’s likely worth doing, if only to set a good example for libraries to follow, unless of course there’s a downside I haven’t thought of yet.  It’s definitely worth discussing, because as more browser enhancements are written, this sort of issue will come up more and more.  Settling on some community best practices could save us some trouble down the road.

Update 23 Mar 12: it turns out custom elements are not as simple as we might prefer; see the comment below for details.  That throws a fairly large wrench into the gears, and requires further contemplation.


“The Vendor Prefix Predicament” at ALA

Published 12 years, 9 months past

Published this morning in A List Apart #344: an interview I conducted with Tantek Çelik, web standards lead at Mozilla, on the subject of Mozilla’s plan to honor -webkit- prefixes on some properties in their mobile browser.  Even better: Lea Verou’s Every Time You Call a Proprietary Feature ‘CSS3,’ a Kitten Dies.  Please — think of the kittens!

My hope is that the interview brings clarity to a situation that has suffered from a number of misconceptions.  I do not necessarily hope that you agree with Tantek, nor for that matter do I hope you disagree.  While I did press him on certain points, my goal for the interview was to provide him a chance to supply information and insight into his position.  If that job was done, then the reader can fairly evaluate the claims and plans presented.  What conclusion they reach is, as ever, up to them.

We’ve learned a lot over the past 15-20 years, but I’m not convinced the lessons have settled in deeply enough.  At any rate, there are interesting times ahead.  If you care at all about the course we chart through them, be involved now.  Discuss.  Deliberate.  Make your own case, or support someone else’s case if they’ve captured your thoughts.  Debate with someone who has a different case to make.  Don’t just sit back and assume everything will work out — for while things usually do work out, they don’t always work out for the best.  Push for the best.

And fix your browser-specific sites already!

