Posts in the Tech Category

Finding Unicode

Published 12 years, 8 months past

A little while back, I was reading some text when I realized the hyphens didn’t look quite right.  A little too wide, I thought.  Not em-dash wide, but still…wide.  Wide-ish?  But when I copied some of the text into a BBEdit window, they looked just like the hyphens I typed into the document.

Of course, I know Unicode is filled with all manner of symbols and that the appearance of those symbols can vary from one font face to another.  So I changed the font face, made the size really huge, and behold: they were indeed different characters.  At this point, I was really curious about what I’d found.  What exactly was it?  How would I find out?

For the record, here’s the character in question:

−

Googling “−” and “− Unicode” got me nothing useful.  I knew I could try the Character Viewer in OS X, and eventually I did, but I was wondering if there was a better (read: lazier) solution.  I asked the Twittersphere for advice, and while I don’t know if these solutions are any lazier, here are the best of the suggestions I received.

  • Unicode Lookup, a site that lets you input or paste in any character and get a report on what it is and how one might call it in various encodings.
  • Richard Ishida’s UniView Lite, which does much the same as Unicode Lookup with the caveat that once you’ve input your character, you have to hit the “Chars” button, not the “Search” button.  The latter is apparently how you search Unicode character names for a word or other string, like “dash” or “quot”.
  • UnicodeChecker (OS X), a nice utility that includes a character list pane as well as the ability to type or paste a character into an input and instantly get its gritty details.

Any of those will tell you that the − in question is MINUS SIGN, codepoint 8722 (decimal) / 2212 (UTF-16 hex) / U+2212 (Unicode hex) / et cetera, et cetera.  Did you know it was designated in Unicode 1.1?  Now you do, thanks to UnicodeChecker and this post.  You’re welcome.
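
For the truly lazy who happen to have Python handy, the standard library can report the same details; this is just a sketch alongside the tools above, with the character hardcoded by its escape:

```python
import unicodedata

ch = "\u2212"  # the mystery character

print(unicodedata.name(ch))    # MINUS SIGN
print(ord(ch))                 # 8722 (decimal)
print(f"U+{ord(ch):04X}")      # U+2212

# The reverse trip also works: from a Unicode name back to the character.
assert unicodedata.lookup("MINUS SIGN") == ch
```

Not quite a contextual-menu level of laziness, but close.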

Update 2 Mar 12:  Philippe Wittenberg points out in the comments that you can add a UnicodeChecker service.  With that enabled, all you have to do is highlight a character, summon the contextual menu (right-click, for most of us), and have it shown in UnicodeChecker.  Now that’s the kind of laziness I was trying to attain!


“The Vendor Prefix Predicament” at ALA

Published 12 years, 9 months past

Published this morning in A List Apart #344: an interview I conducted with Tantek Çelik, web standards lead at Mozilla, on the subject of Mozilla’s plan to honor -webkit- prefixes on some properties in their mobile browser.  Even better: Lea Verou’s Every Time You Call a Proprietary Feature ‘CSS3,’ a Kitten Dies.  Please — think of the kittens!

My hope is that the interview brings clarity to a situation that has suffered from a number of misconceptions.  I do not necessarily hope that you agree with Tantek, nor for that matter do I hope you disagree.  While I did press him on certain points, my goal for the interview was to give him a chance to supply information and insight into his position.  If that job was done well, then readers can fairly evaluate the claims and plans presented.  What conclusion they reach is, as ever, up to them.

We’ve learned a lot over the past 15-20 years, but I’m not convinced the lessons have settled in deeply enough.  At any rate, there are interesting times ahead.  If you care at all about the course we chart through them, be involved now.  Discuss.  Deliberate.  Make your own case, or support someone else’s case if they’ve captured your thoughts.  Debate with someone who has a different case to make.  Don’t just sit back and assume everything will work out — for while things usually do work out, they don’t always work out for the best.  Push for the best.

And fix your browser-specific sites already!


Unfixed

Published 12 years, 9 months past

Right in the middle of AEA Atlanta — which was awesome, I really must say — there were two announcements that stand to invalidate (or at least greatly alter) portions of the talk I delivered.  One, which I believe came out as I was on stage, was the publication of the latest draft of the CSS3 Positioned Layout Module.  We’ll see if it triggers change or not; I haven’t read it yet.

The other was the publication of the minutes of the CSS Working Group meeting in Paris, where it was revealed that several vendors are about to support the -webkit- vendor prefix in their own very non-WebKit browsers.  Thus, to pick but a single random example, Firefox would throw a drop shadow on a heading whose entire author CSS is h1 {-webkit-box-shadow: 2px 5px 3px gray;}.

To an author, it sounds good as long as you haven’t really thought about it very hard, or perhaps if you have a very weak sense of the history of web standards and browser development.  It fits right in with the recurring question, “Why are we screwing around with prefixes when vendors should just implement properties completely correctly, or not at all?”  Those idealized end-states always sound great, but years of evidence (and reams upon reams of bug-charting material) indicate it’s an unrealistic approach.

For a vendor, it may be the least bad choice available in an ever-competitive marketplace.  After all, if there were a few million sites that you could render as intended if only the authors had used your prefix instead of just one vendor’s, which would you rather do: embark on a protracted, massive awareness campaign that would probably be contradicted to death by people with their own axes to grind; or just support the damn prefix and move on with life?

The practical upshot is that browsers “supporting alien CSS vendor prefixes”, as Craig Grannell put it, seriously cripples the whole concept of vendor prefixes.  It may well reduce them to outright pointlessness.  I am on record as being a fan of vendor prefixes, and furthermore as someone who advocated for the formalization of prefixing as a part of the specification-approval process.  Of course I still think I had good ideas, but those ideas are currently being dashed on the shoals of reality.  Fingers can point all they like, but in the end what matters is what happened, not what should have happened if only we’d been a little smarter, a little more angelic, whatever.

I’ve seen a proposal that vendors agree to only support other prefixes in cases where they are un-prefixing their own support.  To continue the previous example, that would mean that when Firefox starts supporting the bare box-shadow, they will also support -webkit-box-shadow (and, one presumes, -ms-box-shadow and -o-box-shadow and so on).  That would mitigate the worst of the damage, and it’s probably worth trying.  It could well buy us a few years.

Developers are also trying to help repair the damage before it’s too late.  Christian Heilmann has launched an effort to get GitHub-based projects updated to stop being WebKit-only, and Aarron Gustafson has published a UNIX command to find all your CSS files containing webkit along with a call to update anything that’s not cross-browser friendly.  Others are making similar calls and recommendations.  You could use -prefix-free as a quick stopgap while going through the effort of doing manual updates.  You could make sure your CSS pre-processor, if that’s how you swing, is set up to do auto-prefixing.
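
As a rough illustration of that kind of audit (a hypothetical Python sketch, not Aarron Gustafson’s actual command), one could flag any -webkit- property in a stylesheet that lacks an unprefixed counterpart:

```python
import re

# Hypothetical helper: list the -webkit- prefixed properties in a
# stylesheet that have no unprefixed counterpart anywhere in the text.
def webkit_only_properties(css_text):
    prefixed = set(re.findall(r"-webkit-([a-z-]+)\s*:", css_text))
    # property names that start without any vendor prefix
    unprefixed = set(re.findall(r"(?<![-a-z])([a-z][a-z-]*)\s*:", css_text))
    return sorted(p for p in prefixed if p not in unprefixed)

css = "h1 { -webkit-box-shadow: 2px 5px 3px gray; color: gray; }"
print(webkit_only_properties(css))  # ['box-shadow']
```

A real audit would parse the CSS properly rather than lean on regular expressions, but even this much would surface the worst offenders.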

Non-WebKit vendors are in a corner, and we helped put them there.  If the proposed prefix change is going to be forestalled, we have to get them out.  Doing that will take a lot of time and effort and awareness and, above all, widespread interest in doing the right thing.

Thus my fairly deep pessimism.  I’d love to be proven wrong, but I have to assume the vendors will push ahead with this regardless.  It’s what we did at Netscape ten years ago, and almost certainly would have done despite any outcry.  I don’t mean to denigrate or undermine any of the efforts I mentioned before — they’re absolutely worth doing even if every non-WebKit browser starts supporting -webkit- properties next week.  If nothing else, it will serve as evidence of your commitment to professional craftsmanship.  The real question is: how many of your fellow developers come close to that level of commitment?

And I identify that as the real question because it’s the question vendors are asking — must ask — themselves, and the answer serves as the compass for their course.


Vigilance and Victory

Published 12 years, 10 months past

After the blackout on Wednesday, it seems that the political tides are shifting against SOPA and the PROTECT IP Act — as of this writing, there are now more members of Congress in opposition to the bills than in favor.  That’s good news.

I will reiterate something I said on Twitter, though:  the members of the tech community, particularly those who are intimately familiar with the basic protocols of the Internet, need to keep working on ways to counteract SOPA/PIPA.  What form that would take, I’m not sure.  Maybe a truly distributed DNS system, one that can’t be selectively filtered by any one government or other entity.  I’m not an expert in the area, so I don’t actually know if that’s feasible.  There’s probably a much more clever solution, or better still, a suite of solutions.

The point is, SOPA and PIPA may soon go down to defeat, but they will return in another form.  There is too much money in the hands of those who first drafted these bills, and they’re willing to give a fair chunk of that money to those who introduced the bills in Congress.  Never mistake winning a battle for winning the war.  As someone else observed on Twitter (and I wish I could find their tweet now), the Internet community fought hard against the DMCA, and it’s been US law for more than a decade.

By all means, take a moment to applaud the widespread and effective community effort to oppose and (hopefully) defeat bad legislation.  When that’s done, take notes on what worked and what didn’t, and then prepare to fight again and harder.  Fill the gap between battles with outreach to your elected representatives and with efforts to educate the non-technical people in your life about why SOPA/PIPA were and are a bad idea.

Days of action feel great.  Months of effort are wearying.  But it’s only the latter that can slowly and painfully bring about long-term change.


Standing In Opposition

Published 12 years, 10 months past

Though I certainly do not support SOPA or the PROTECT IP Act (the complete, rather contrived acronym of PIPA), I will not be blacking out meyerweb.  This is largely because the vast majority of my readers already know about these bills, and very likely oppose them; as for anyone who visits but does not know about these bills, I feel I’ll do better to speak out than to black out.  (Which is not a criticism of those who do black out.  We all fight in our own ways.)

Instead, I will reproduce here the letter I attempted to send via contact form to my state Senator this morning, and which I will print out and send by regular postal service later today.

Senator Brown:

I grew up in Lexington, Ohio.  I moved to Cleveland in pursuit of a career, and found success.  Through a combination of good luck and hard work, I have (rather to my surprise) become a widely recognized name in my field, which is web design and development.  Along the way, I co-founded a web design conference with an even more widely respected colleague that has become one of the most respected and successful web design events in the world.  This business is headquartered in Ohio — I live in Cleveland Heights with my family, and I intend to stay here until I either retire to Florida or die.  Politically I’m best described as a moderate independent, though I do tend to lean a bit to the left.

As you can imagine, given my line of work, I have an opinion regarding the PROTECT IP Act which you have co-sponsored.  The aims of PROTECT IP are understandable, but the methods are unacceptable.  Put another way, if you wish to combat piracy and intellectual property theft, there are far better ways to go about it.

As someone with twenty years of technical experience with the Internet and nearly as many with the web — I started creating web pages in late 1993 — please believe me when I say the enforcement mechanisms of the bill are deeply flawed and attack the very features of the Web that make it what it is.  They are akin to making a criminal of anyone who gives directions to a park where drug trafficking takes place, regardless of whether they knew about the drug trafficking.  You don’t have to be in favor of drug trafficking to oppose that.

This is not a case where tweaking a clause or two will fix it; correction in this case would mean starting from scratch.  Again, the objection is not with the general intent of the bill.  It is with how the bill goes about achieving those aims.

If you would like to discuss this with me further, I would be delighted to do whatever I can to help, but in any event I strongly urge you to reconsider your co-sponsorship of the PROTECT IP Act.

Thank you for your time and consideration.

Eric A. Meyer (http://meyerweb.com/)

Partner and co-founder, An Event Apart (http://aneventapart.com/)

If you agree that the PROTECT IP Act is poorly conceived, find out if your senator supports PIPA.  If they do, get in touch and let them know about your opposition.  If they oppose the bill, get in touch and thank them for their opposition.  If their support or opposition isn’t known, get in touch and ask them to please speak out in opposition to the bill.

As others have said, postal letters are better than phone calls, which are in turn better than e-mail, which is in turn better than signing petitions.  Do what you can, please.  The web site you save might be your own.


The Survey, 2011

Published 13 years, 1 week past

Back on Tuesday, A List Apart opened the 2011 edition of The Survey for People Who Make Web Sites, the fifth annual effort to learn more about the people who work in the web industry.  If you haven’t taken it yet, please do so!  It should take about ten minutes.

I’m proud to have been a part of this effort since its inaugural launch back in 2007.  It’s a major undertaking, mostly in analyzing the data and turning that into a detailed report, but it’s more than worth the time and effort.  Before the Survey, we really didn’t know very much about who we were as a field of practice, and without it we wouldn’t have as clear a picture of who we are today.

There have been growing pains, of course, chief among them UCCASS, the survey software we’ve been using since the outset.  Its limitations and lack of updates finally pushed us to find another platform, and we chose to move over to Polldaddy.  Many thanks to the Polldaddy team for giving the survey a home and helping me figure out the best strategies for recreating the survey.  (And also for putting up with my occasionally testy feature and support requests.  Sorry, gang.)

Due to differences between UCCASS and Polldaddy, we ended up restructuring the survey into two distinct paths.  I think this change actually speeds the process of taking the survey.  I’m pretty sure just about anyone could get through it in under ten minutes.

Unsurprisingly, participation in the survey has dropped over the years; last year’s survey had a bit more than half as many respondents as the first-ever survey back in 2007.  Tellingly, the actual results have been pretty consistent over the years.  I’d really like to see how those results stand up to an increase in respondents, so please:

  • If you haven’t taken the survey yet, kindly set aside ten minutes and do so.
  • If you have taken the survey, thank you.  Now, spread the word!  If you could post a quick link to any mailing lists, web forums, newsgroups, or other professional communities in which you participate, it will be an enormous help.  The more practitioners we have answer, the better the results.

As always, the survey will close a month after it opened; and as always, a detailed report will be published — feel free to peruse the reports from 2007 (PDF), 2008, 2009, and 2010 — along with anonymized data sets for independent analysis.  Together, they form a picture, but one that is still being drawn.  Please help us to add the most essential detail — you!


Searching For Mark Pilgrim

Published 13 years, 1 month past

[[ MARK IS FINE and his work is not lost.  Please see the update and addendum later in the post.  — E. ]]

Just yesterday, I took a screenshot of the title page of Dive Into HTML5 to include in a presentation as a highly recommended resource.  Now it’s gone.  That site, along with all the other “Dive Into…” sites (Accessibility, Python, Greasemonkey, etc.) and addictionis.org, is returning an HTTP “410 Gone” message.  Mark’s GitHub, Google+, Reddit, and Twitter accounts have all been deleted.  And attempts to e-mail him have bounced.

This is very reminiscent of Why the Lucky Stiff’s infosuicide, and it’s honestly shocking.  If anyone is in direct contact with Mark, please let me know that he’s okay via comment here or by direct e-mail, even if his internet presence has been erased.  As much as I hate for the world to lose all of the incredible information he’s created and shared, that would be as nothing compared to losing the man himself.

“Embracing HTTP error code 410 means embracing the impermanence of all things.”

 — Mark Pilgrim, March 27, 2003 (diveintomark.org)

Update 5 Oct 11: Jason Scott just tweeted the following:

Mark Pilgrim is alive/annoyed we called the police. Please stand down and give the man privacy and space, and thanks everyone for caring.

The communication was specifically verified, it was him, and that’s that. That was the single hardest decision I’ve had to make this year.

So there you have it.  I’m sorry to have helped annoy Mark, am very glad he’s well, and sincerely hope that we can all give him the privacy he desires.  And with that, I’m going to sleep now.  Thank you, everyone.

Addendum 5 Oct 11: Several people have asked me if I know why Mark took this step.  I don’t.  I have three comments in the moderation queue all claiming to be from Mark, only one of which even approaches sounding credible, and none of which have any sort of verification.  Unless Mark contacts me directly, or changes his server to return an explanatory note instead of or along with a 410, or something similar, I’m as much in the dark as anyone else.  If he’d like to talk with me about it, he’s certainly more than welcome to do so, but he’s under no obligation to explain himself to me or anyone else.

Mirrors of Mark’s work have started appearing (see the comments for some of them), and so his legacy, if not his presence, will not be lost.  I am assuming that he has simply withdrawn from digital life, that his reasons are his own, and that if he feels like explaining those reasons he will find a way to do so.  Regardless, his path is his own, and we should leave him to walk it as he chooses.


CSS Modules Throughout History

Published 13 years, 2 months past

For very little reason other than I was curious to see what resulted, I’ve compiled a list of various CSS modules’ version histories, and then used CSS to turn it into a set of timelines.  It’s kind of a low-cost way to visualize the life cycle of and energy going into various CSS modules.

I’ll warn you up front that as of this writing the user interaction is not ideal, and in some places the presentation suffers from too much content overlap.  This happens in timelines where lots of drafts were released in a short period of time.  (In one case, two related drafts were released on the same day!)  I intend to clean up the presentation, but for the moment I’m still fiddling with ideas.  The obvious one is to rotate every other spec name by -45 degrees, but that looked kind of awful.  I suspect I’ll end up doing some sort of timestamp comparison and, if two drafts are too close together, tossing on a class that invokes a -45deg rotation.  Or maybe I’ll get fancier!
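
That timestamp idea could be sketched like so (hypothetical Python, not the code behind the actual timelines): walk the drafts in date order and flag any that land too close to their predecessor, so the CSS can hang a -45deg rotation off the resulting class.

```python
from datetime import date, timedelta

# Hypothetical sketch of the idea described above: mark drafts whose
# publication dates crowd their neighbors, so their labels can get a
# class that invokes a -45deg rotation in the CSS.
def crowded(dates, min_gap=timedelta(days=30)):
    flags = [False] * len(dates)
    ordered = sorted(range(len(dates)), key=lambda i: dates[i])
    for a, b in zip(ordered, ordered[1:]):
        if dates[b] - dates[a] < min_gap:
            flags[b] = True  # this draft's label gets the rotation class
    return flags

drafts = [date(2011, 6, 7), date(2011, 6, 7), date(2011, 9, 29)]
print(crowded(drafts))  # [False, True, False]
```

The 30-day threshold is an arbitrary stand-in; the right gap would depend on how wide the spec names render.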

The interaction is a little tougher to improve, given what’s being done here, but I have a few ideas for making things, if not perfect, at least less twitchy.

I should also note that not every module is listed as I write this:  I intentionally left off modules whose last update was 2006 or earlier.  I may add them at the end, or put them into a separate set of timelines.  The historian in me definitely wants to see them included, but the shadow of a UX person who dwells somewhere in the furthest corners of my head wanted to avoid as much clutter as possible.  We’ll see which one wins.

Anyway, somewhat like the browser release timeline, which is probably going to freeze in the face of the rapid-versioning schemes that are all the rage these days, I had fun combining my love of the web and my love of history.  I should do it more often, really.  The irony is that I don’t really have the time.

