Posts in the Web Category

John Allsopp to Inaugurate ‘The Web Behind’

Published 11 years, 7 months past

Jen Simmons and I are very pleased to announce that our first guest on The Web Behind will be none other than John Allsopp.

Hailing from Sydney, Australia, John by himself has seen and done more on the web than most web teams put together.  First encountering the web in the early 1990s, he built one of the very first CSS tools, Style Master, and a number of other web development tools; published a wealth of information like support charts and free courses; wrote the deeply insightful and far-seeing article “A Dao of Web Design”; influenced the course of the Web Standards Project; and founded a successful international conference series that continues to this day.

We’re incredibly excited to have John as our inaugural guest, and hope you’ll join us for the live recording this Thursday, September 20th at 6pm Eastern/3pm Pacific.  That’s also Friday, September 21st at 8am Sydney time, and 2200 UTC if you want to calculate your own local offsets.  The time zone dance is the reason we’re recording the first show at that particular time.  Moving forward, the plan is to record on Wednesdays, usually mid-afternoon (US Eastern) but sometimes in the morning — again, depending on the time zones of our guests.

Be able to say you were there when it all started:  please join us for the live recording, and subscribe to get the finished podcasts as they’re released.  We already have some great guests lined up for subsequent shows — more on that as we firm up dates and times — and some interesting plans for the future.  We really hope you’ll be there with us!


The Web Behind

Published 11 years, 7 months past

Whenever I meet a new person and we get to talking about our personal lives, one of the things that seems to surprise people the most, besides the fact that I live in Cleveland and not in New York City or San Francisco, is that I have a Bachelor of Arts in History.  The closest I came to Computer Science was a minor concentration in Artificial Intelligence, and in all honesty it was more of a philosophical study.

To me, history is vital.  As a species, we’ve made a plethora of mistakes and done myriad things right, and the record (and outcomes) of those successes and failures can tell us a great deal about how we got to where we are as well as where we might go.  (Also, from a narrative standpoint, history is the greatest and most authentic story we’ve ever told — even the parts that are untrue.)  The combination of that interest and my ongoing passion for the web is what led me to join the W3C’s recently formed Web History Community Group, where efforts to preserve (digital) historical artifacts are slowly coalescing.

But even more importantly, it’s what has led me to establish a new web history podcast in association with Jen Simmons of The Web Ahead.  The goal of this podcast, which is a subset of The Web Ahead, is to interview the people who made the web of today possible.  The guests will be authors, programmers, designers, vendors, toolmakers, hobbyists, academics: some whose names you’ll instantly recognize, and others you’ve never heard of even though they helped shape everything we do.  We want to bring you their stories, get their insights and perspectives, and find out what they’ve been doing of late.  The Mac community has folklore.org; I hope that this podcast will help start to build a similar archive for the web.  You can hear us talk about it a bit on The Web Ahead #34, where we announce our first guest as well as the date and time for our first show!  (Semi-spoiler: it’s next week.)

Jen and I took to calling this project The Web Behind in our emails, and the name stuck.  It really is a subset of The Web Ahead, so if you’re already subscribed to The Web Ahead, then episodes of The Web Behind will come to you automatically!  If not, and you’re interested, then please subscribe!  We already have some great guests lined up, and will announce the first few very soon.

I haven’t been this excited about a new project in quite some time, so I very much hope you’ll join Jen and me (and be patient as I relearn my radio chops) for a look back that will help to illuminate both our present and our future.


Results From The Survey, 2011

Published 11 years, 7 months past

On Tuesday — and I fully acknowledge that it’s emblematic that it took me until now to blog this — A List Apart published the results of the fifth annual A List Apart Survey for People Who Make Websites.  This includes anonymized data sets for the bulk of the survey, as well as standalone data sets for postcodes and a few of the answer sets for questions that allowed “Other” as an option.  (Note that these last were shuffled-then-sorted, and were not filtered for potentially objectionable content.  They are what they are.)

If you really want the TL;DR version, the results are largely the same as they’ve been in the past.  The gender ratio, for example, is still in the vicinity of 5-to-1 male-to-female, with half a percent answering Other (a new option in the 2011 survey).  Most respondents are in the age range 19-44 and live in the United States.  And so on.  That might sound like I’m bored by the results, but their very consistency even as the number of respondents has dropped over five years fascinates me.

It did take quite a while to publish the results.  I feel personally very bad about the delay, because running the numbers falls to me, and it just took me a long time to get them run.  Partly, I admit, I put it off because some of the numbers in previous years were a royal pain to generate, thanks in part to the way the data is formatted and in part because of the fine slicing that was done.  This was finally addressed through various means, and now the report is done.  I can’t thank Sara Wachter-Boettcher enough for her keen editing eye and firm strategic oversight, not to mention writing all the commentary text to accompany the charts.  If not for her, the report might still not be done.  And of course without the unwavering support and dedication of Jeffrey Zeldman, the survey might not have existed at all.

So we’ve done this five times, and the results are consistent.  What now?  There is much to discuss, and the answers aren’t yet clear; but I do know that this project brings me more professional pride than almost anything I’ve ever done.  It tells us a lot about ourselves — and in a profession that is often characterized by single-person “web teams” and distributed offices, one which may never have a certification process or other form of registry, that’s something valuable.  Thank you for helping us see ourselves a little bit more clearly.


Firefox Failing localStorage Due to Cookie Policy

Published 12 years, 1 week past

I recently stumbled over a subtle interaction between cookie policies and localStorage in Firefox.  Herewith, I document it for anyone who might run into the same problem (all four of you) as well as for you JS developers who are using, or thinking about using, locally stored data.  Also, there’s a Bugzilla report, so either it’ll get fixed and then this won’t be a problem or else it will get resolved WONTFIX and I’ll have to figure out what to do next.

The basic problem is, every newfangled “try code out for yourself” site I hit is just failing in Firefox 11 and 12.  Dabblet, for example, just returns a big blank page with the toolbar across the top, and none of the top-right buttons work except for the Help (“?”) button.  And I write all that in the present tense because the problem still exists as I write this.

What’s happening is that any attempt to access localStorage, whether writing or reading, returns a security error.  Here’s an anonymized example from Firefox’s error console:

Error: uncaught exception: [Exception... "Security error"  code: "1000" nsresult: "0x805303e8 (NS_ERROR_DOM_SECURITY_ERR)"  location: "http://example.com/code.js Line: 666"]

When you go to line 666, you discover it refers to localStorage.  Usually it’s a write attempt, but reading gets you the same error.
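To make the failure mode concrete, here’s a minimal sketch of the kind of access that trips it; the key name and value are arbitrary placeholders:

    // Minimal sketch; the key and value are placeholders.  Under the
    // affected cookie policy, both the read and the write below throw
    // the security error quoted above in Firefox 11/12.
    try {
        var stored = window.localStorage.getItem("example-key");  // read
        window.localStorage.setItem("example-key", "value");      // write
    } catch (err) {
        // Execution lands here instead of reading or storing anything.
        window.console.log("localStorage blocked: " + err.message);
    }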

But here’s the thing: it only does this if your browser preferences are set so that, when it comes to accepting cookies, the “Keep until:” option is set to “ask me every time”.  If you change that to either of the other two options, then localStorage can be written and read without incident.  No security errors.  Switch it back to “ask me every time”, and the security errors come back.

Just to cover all the bases regarding my configuration:

  1. Firefox is not in Private Browsing mode.
  2. dom.storage.default_quota is 5120.
  3. dom.storage.enabled is true.

Also:  yes, I have my cookie policy set that way on purpose.  It might not work for you, but it definitely works for me.  “Just change your cookie policy” is the new “use a different browser” (which is the new “get a better OS”) and it ain’t gonna fly here.

To my way of thinking, this behavior doesn’t conform to step one of 4.3 The localStorage attribute, which states:

The user agent may throw a SecurityError exception instead of returning a Storage object if the request violates a policy decision (e.g. if the user agent is configured to not allow the page to persist data).

I haven’t configured anything to not persist data — quite the opposite — and my policy decision is not to refuse cookies, it’s to ask me about expiration times so I can decide how I want a given cookie handled.  It seems to me that, given my current preferences, Firefox ought to ask me if I want to accept local storage of data whenever a script tries to write to localStorage.  If that’s somehow impossible, then there should at least be a global preference for how I want to handle localStorage actions.

Of course, that’s all true only if localStorage data has expiration times.  If it doesn’t, then I’ve already said I’ll accept cookies, even from third-party sites.  I just want a say on their expiration times (or, if I choose, to deny the cookie through the dialog box; it’s an option).  I’m not entirely clear on this, so if someone can point to hard information on whether localStorage does or doesn’t time out, that would be fantastic.  I did see:

User agents should expire data from the local storage areas only for security reasons or when requested to do so by the user.

…from the same section, which to me sounds like localStorage doesn’t have expiration times, but maybe there’s another bit I haven’t seen that casts a new light on things.  As always, tender application of the Clue-by-Four of Enlightenment is welcome.

Okay, so the point of all this: if you’re getting localStorage failures in Firefox, check your cookie expiration policy.  If that’s the problem, then at least you know how to fix it — or, as in my case, why you’ll continue to have localStorage problems for the next little while.  Furthermore, if you’re writing JS that interacts with localStorage or a similar local-data technology, please make sure you’re looking for security exceptions and other errors, and planning appropriate fallbacks.
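To make that last bit concrete, here’s a minimal sketch of defensive access; the function names are my own invention, not from any library:

    // A sketch of defensive localStorage access, not a definitive
    // implementation.  The function names are invented for illustration.
    function safeGetItem(key) {
        try {
            return window.localStorage.getItem(key);
        } catch (err) {
            return null;  // storage blocked by policy; caller falls back
        }
    }

    function safeSetItem(key, value) {
        try {
            window.localStorage.setItem(key, value);
            return true;
        } catch (err) {
            return false;  // blocked or over quota; caller falls back
        }
    }

With wrappers like these, a script can fall back to cookies, an in-memory object, or simply no persistence at all when the calls return null or false.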


Invented Elements

Published 12 years, 1 month past

This morning I caught a pointer to TypeButter, which is a jQuery library that does “optical kerning” in an attempt to improve the appearance of type.  I’m not going to get into its design utility because I’m not qualified; I only notice kerning either when it’s set insanely wide or when it crosses over into keming.  I suppose I’ve been looking at web type for so many years, it looks normal to me now.  (Well, almost normal, but I’m not going to get into my personal typographic idiosyncrasies now.)

My reason to bring this up is that I’m very interested by how TypeButter accomplishes its kerning: it inserts kern elements with inline style attributes that bear letter-spacing values.  Not span elements, kern elements.  No, you didn’t miss an HTML5 news bite; there is no kern element, nor am I aware of a plan for one.  TypeButter basically invents a specific-purpose element.
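For illustration, here’s a sketch of the general technique; it is not TypeButter’s actual code, and the element reference and letter-spacing value are placeholders:

    // Sketch of the technique, not TypeButter's actual code.  Wrap a
    // character in an invented <kern> element that carries an inline
    // letter-spacing value; the value and target node are placeholders.
    var kern = document.createElement("kern");
    kern.style.letterSpacing = "-0.05em";
    kern.textContent = "A";
    heading.appendChild(kern);  // "heading" is a placeholder element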

I believe I understand the reasoning.  Had they used span, they would’ve likely tripped over existing author styles that apply to span.  Browsers these days don’t really have a problem accepting and styling arbitrary elements, and any that do would simply render type their usual way.  Because the markup is script-generated, markup validation services don’t throw conniption fits.  There might well be browser performance problems, particularly if you optically kern all the things, but used in moderation (say, on headings) I wouldn’t expect too much of a hit.

The one potential drawback I can see, as articulated by Jake Archibald, is the possibility of a future kern element that might have different effects, or at least be styled by future author CSS and thus get picked up by TypeButter’s kerns.  The currently accepted way to avoid that sort of problem is to prefix with x-, as in x-kern.  Personally, I find it deeply unlikely that there will ever be an official kern element; it’s too presentationally focused.  But, of course, one never knows.

If TypeButter shifted to generating x-kern before reaching v1.0 final, I doubt it would degrade the TypeButter experience at all, and it would indeed be more future-proof.  It’s likely worth doing, if only to set a good example for libraries to follow, unless of course there’s a downside I haven’t thought of yet.  It’s definitely worth discussing, because as more browser enhancements are written, this sort of issue will come up more and more.  Settling on some community best practices could save us some trouble down the road.

Update 23 Mar 12: it turns out custom elements are not as simple as we might prefer; see the comment below for details.  That throws a fairly large wrench into the gears, and requires further contemplation.


“The Vendor Prefix Predicament” at ALA

Published 12 years, 2 months past

Published this morning in A List Apart #344: an interview I conducted with Tantek Çelik, web standards lead at Mozilla, on the subject of Mozilla’s plan to honor -webkit- prefixes on some properties in their mobile browser.  Even better: Lea Verou’s Every Time You Call a Proprietary Feature ‘CSS3,’ a Kitten Dies.  Please — think of the kittens!

My hope is that the interview brings clarity to a situation that has suffered from a number of misconceptions.  I do not necessarily hope that you agree with Tantek, nor for that matter do I hope you disagree.  While I did press him on certain points, my goal for the interview was to provide him a chance to supply information, and insight into his position.  If that job was done, then the reader can fairly evaluate the claims and plans presented.  What conclusion they reach is, as ever, up to them.

We’ve learned a lot over the past 15-20 years, but I’m not convinced the lessons have settled in deeply enough.  At any rate, there are interesting times ahead.  If you care at all about the course we chart through them, be involved now.  Discuss.  Deliberate.  Make your own case, or support someone else’s case if they’ve captured your thoughts.  Debate with someone who has a different case to make.  Don’t just sit back and assume everything will work out — for while things usually do work out, they don’t always work out for the best.  Push for the best.

And fix your browser-specific sites already!


Unfixed

Published 12 years, 2 months past

Right in the middle of AEA Atlanta — which was awesome, I really must say — there were two announcements that stand to invalidate (or at least greatly alter) portions of the talk I delivered.  One, which I believe came out as I was on stage, was the publication of the latest draft of the CSS3 Positioned Layout Module.  We’ll see if it triggers change or not; I haven’t read it yet.

The other was the publication of the minutes of the CSS Working Group meeting in Paris, where it was revealed that several vendors are about to support the -webkit- vendor prefix in their own very non-WebKit browsers.  Thus, to pick but a single random example, Firefox would throw a drop shadow on a heading whose entire author CSS is h1 {-webkit-box-shadow: 2px 5px 3px gray;}.

As an author, it sounds good as long as you haven’t really thought about it very hard, or if perhaps you have a very weak sense of the history of web standards and browser development.  It fits right in with the recurring question, “Why are we screwing around with prefixes when vendors should just implement properties completely correctly, or not at all?”  Those idealized end-states always sound great, but years of evidence (and reams upon reams of bug-charting material) indicate it’s an unrealistic approach.

As a vendor, it may be the least bad choice available in an ever-competitive marketplace.  After all, if there were a few million sites that you could render as intended if only the authors used your prefix instead of just one, which would you rather: embark on a protracted, massive awareness campaign that would probably be contradicted to death by people with their own axes to grind; or just support the damn prefix and move on with life?

The practical upshot is that browsers “supporting alien CSS vendor prefixes”, as Craig Grannell put it, seriously cripples the whole concept of vendor prefixes.  It may well reduce them to outright pointlessness.  I am on record as being a fan of vendor prefixes, and furthermore as someone who advocated for the formalization of prefixing as a part of the specification-approval process.  Of course I still think I had good ideas, but those ideas are currently being sliced to death on the shoals of reality.  Fingers can point all they like, but in the end what matters is what happened, not what should have happened if only we’d been a little smarter, a little more angelic, whatever.

I’ve seen a proposal that vendors agree to only support other prefixes in cases where they are un-prefixing their own support.  To continue the previous example, that would mean that when Firefox starts supporting the bare box-shadow, they will also support -webkit-box-shadow (and, one presumes, -ms-box-shadow and -o-box-shadow and so on).  That would mitigate the worst of the damage, and it’s probably worth trying.  It could well buy us a few years.

Developers are also trying to help repair the damage before it’s too late.  Christian Heilmann has launched an effort to get GitHub-based projects updated to stop being WebKit-only, and Aarron Gustafson has published a UNIX command to find all your CSS files containing webkit along with a call to update anything that’s not cross-browser friendly.  Others are making similar calls and recommendations.  You could use PrefixFree as a quick stopgap while going through the effort of doing manual updates.  You could make sure your CSS pre-processor, if that’s how you swing, is set up to do auto-prefixing.

Non-WebKit vendors are in a corner, and we helped put them there.  If the proposed prefix change is going to be forestalled, we have to get them out.  Doing that will take a lot of time and effort and awareness and, above all, widespread interest in doing the right thing.

Thus my fairly deep pessimism.  I’d love to be proven wrong, but I have to assume the vendors will push ahead with this regardless.  It’s what we did at Netscape ten years ago, and almost certainly would have done despite any outcry.  I don’t mean to denigrate or undermine any of the efforts I mentioned before — they’re absolutely worth doing even if every non-WebKit browser starts supporting -webkit- properties next week.  If nothing else, it will serve as evidence of your commitment to professional craftsmanship.  The real question is: how many of your fellow developers come close to that level of commitment?

And I identify that as the real question because it’s the question vendors are asking — must ask — themselves, and the answer serves as the compass for their course.


Vigilance and Victory

Published 12 years, 3 months past

After the blackout on Wednesday, it seems that the political tides are shifting against SOPA and the PROTECT IP Act — as of this writing, there are now more members of Congress in opposition to the bills than in favor.  That’s good news.

I will reiterate something I said on Twitter, though:  the members of the tech community, particularly those who are intimately familiar with the basic protocols of the Internet, need to keep working on ways to counteract SOPA/PIPA.  What form that would take, I’m not sure.  Maybe a truly distributed DNS system, one that can’t be selectively filtered by any one government or other entity.  I’m not an expert in the area, so I don’t actually know if that’s feasible.  There’s probably a much more clever solution, or better still, a suite of solutions.

The point is, SOPA and PIPA may soon go down to defeat, but they will return in another form.  There is too much money in the hands of those who first drafted these bills, and they’re willing to give a fair chunk of that money to those who introduced the bills in Congress.  Never mistake winning a battle for winning the war.  As someone else observed on Twitter (and I wish I could find their tweet now), the Internet community fought hard against the DMCA, and it’s been US law for more than a decade.

By all means, take a moment to applaud the widespread and effective community effort to oppose and (hopefully) defeat bad legislation.  When that’s done, take notes on what worked and what didn’t, and then prepare to fight again and harder.  Fill the gap between battles with outreach to your elected representatives, and with efforts to educate the non-technical people in your life about why SOPA/PIPA were and are a bad idea.

Days of action feel great.  Months of effort are wearying.  But it’s only the latter that can slowly and painfully bring about long-term change.

