Posts in the Web Category

CWRU2K

Published 5 years, 3 months past

Before I tell you this story of January 1st, 2000, I need to back things up a few months into mid-1999.  I was working at Case Western Reserve University as a Hypermedia Systems Specialist, which was the closest the university’s job title patterns could get to my actual job, which was, no irony or shade, campus Webmaster.  I was in charge of www.cwru.edu and providing support to departments who wanted a Web presence on our server, among many other things.  My fellow Digital Media Services employees provided similar support for other library and university systems.

So in mid-1999, we were deep in the throes of Y2K certification.  The young’uns in the audience won’t remember this, but to avoid loss of data and services when the year rolled from 1999 to 2000, pretty much the entire computer industry was engaged in a deep audit of every computer and program under our care.  There’s really been nothing quite like it, before or since, but the job got done.  In fact, it got done so well that barely anything adverse happened, and some misguided people now think it was all a hoax designed to extract hefty consulting fees, instead of the successful global preventative effort it actually was.

As for us, pretty much everything on the Web side was fine.  And then, in the middle of one of our staff meetings about Y2K certification, John Sully said something to the effect of, “Wouldn’t it be funny if the Web server suddenly thought it was 1900 and you had to use a telegraph to connect to it?”

We all laughed and riffed on the concept for a bit and then went back to Serious Work Topics, but the idea stuck in my head.  What would a 1900-era Web site look like?  Technology issues aside, it wasn’t a complete paradox: the ancestor parts of CWRU, the Case Institute of Technology and the Western Reserve University, had long existed by 1900 (founded 1880 and 1826, respectively).  The campus photos would be black and white rather than color, but there would still be photos.  The visual aesthetic might be different, but…

I decided to make it a reality, and CWRU2K was born.  With the help of the staff at University Archives and a flatbed scanner I hauled across campus on a loading dolly, I scanned a couple dozen photos from the period 1897-1900 — basically, all those that were known to be in the public domain, and which depicted the kinds of scenes you might put on a Web site’s home page.

Then I reskinned the home page to look more “old-timey” without completely altering the layout or look.  Instead of university-logo blues and gold, I recolored everything to be wood-grain.  Helvetica was replaced with an “Old West” font in the images, of which there were several, mostly in the form of MM_swapimage-style rollover buttons.  In the process, I actually had to introduce two Y2K bugs to the code we used to generate dates on the page, so that instead of saying 2000 they’d actually say 1899 or 1900.  I changed other things to match the era, like reformatting the phone number to use two-letters-then-numbers format while still retaining full international dialing information, and adding little curlicues to things.  Well before the holidays, everything was ready.
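
I no longer have the exact code, but the spirit of the change was the classic JavaScript getYear() trap, installed on purpose.  Something like this reconstruction (the details are invented for illustration, not the original source):

    // getYear() returns years since 1900 in Netscape-era JavaScript:
    // 99 in 1999, 100 in 2000.  Adding 1800 instead of the correct
    // 1900 "breaks" the display in exactly the way we wanted:
    var yearDisplay = 1800 + new Date().getYear();  // 1999 -> 1899, 2000 -> 1900
    document.write(yearDisplay);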

The files were staged, a cron job was set up, and at midnight on January 1st, 2000, the home page seamlessly switched over to its 1900 incarnation.  That’s a static snapshot of the page, so the picture will never change, but I have a gallery of all the pictures that could appear, along with their captions, which I strove to write in that deadpan stating-the-obvious tone the late 19th Century always brings to my mind.  (And take a close look at the team photo of The Rough Riders!)

In hindsight, our mistake was most likely in adding a similarly deadpan note to the home page that read:

Year 2000 Issues

Despite our best efforts at averting Y2K problems, it seems that our Web server now believes that it is January of 1900. Please be advised that we are working diligently on the problem and hope to have it fixed soon.

I say that was a mistake because it was quoted verbatim in stories at Wired and The Washington Post about Y2K glitches, both of which reported that we’d actually suffered a real, unintentional Y2K bug, with Wired giving us points for having “guts” in publicly calling “a glitch a glitch”.  After I emailed both reporters to explain the situation and point them to our press release about it, The Washington Post did publish a correction a few days later, buried in a bottom corner of page A16 or something like that.  So far as I know, Wired never acknowledged the error.

CWRU2K lasted a little more than a day.  Although we’d planned to leave it up until the end of January, we were ordered to take it down on January 2nd.  My boss, Ron Ryan, was directed to put a note in my Permanent Record.  The general attitude Ron conveyed to me was along the lines of, “The administration says it’s clever and all, but it’s time to go back to the regular home page.  Next time, we need to ask permission rather than forgiveness.”

What we didn’t know at the time was how close he’d come to being fired.  At Ron’s retirement party last year, the guy who was his boss on January 2nd, 2000, Jim Barker, told Ron that Jim had been summoned that day to a Vice President’s office, read the riot act, and was sent away with instructions to “fire Ron’s ass”.  Fortunately, Jim… didn’t.  And then kept it to himself for almost 20 years.

There were a number of other consequences.  We got quite a bit of email about it, some in on the joke, others taking it as seriously as Wired had.  There’s a particularly lovely note partway down that page from the widow of a Professor Emeritus, and I have to admit that I still smile over the props we got from folks on the NANOG mailing list.  I took an offer to join a startup a couple of months later, and while I was probably ready to move on in any case, the CWRU2K episode — or rather, the administration’s reaction to it — helped push me to make the jump.  I was probably being a little juvenile and over-reacting, but I guess you do that when you’re younger.  (And I probably would have left the next year regardless, when I got the offer to join Netscape as a Standards Evangelist.  Actual job title!)

So, that’s the story of how Y2K affected me.  There are some things I probably would have done differently if I had it to do over, but I’m 100% glad we did it.


Securing Web Sites Made Them Less Accessible

Published 6 years, 8 months past

In the middle of last month (July 2018), I found myself staring at a projector screen, waiting once again to see if Wikipedia would load.  If I was lucky, the page started rendering 15-20 seconds after I sent the request.  If not, it could be closer to 60 seconds, assuming the browser didn’t just time out on the connection.  I saw a lot of “the server stopped responding” over the course of a few days.

It wasn’t just Wikipedia, either.  CNN International had similar load times.  So did Google’s main search page.  Even this here site, with minimal assets to load, took a minimum of 10 seconds to start rendering.  Usually longer.

In 2018?  Yes.  In rural Uganda, where I was improvising an introduction to web development for a class of vocational students, that’s the reality.  They can have a computer lab full of Dell desktops running Windows or rows of Raspberry Pis running Ubuntu or whatever setup there is, but when satellites in geosynchronous earth orbit are your only source of internet, you wait.  And wait.  And wait.

I want to explain why — and far more importantly, how we’ve made that experience interminably worse and more expensive in the name of our comfort and security.

First, please consider the enormously constrained nature of satellite internet access.  If you’re already familiar with this world, skip ahead a few paragraphs; but if not, permit me a brief description of the challenges.

For geosynchronous-satellite internet access, the speed of light becomes a factor in ping times: just having the signals propagate through a mixture of vacuum and atmosphere chews up approximately half a second of travel time over roughly 89,000 miles (~143,000km).  If all that distance were vacuum, your absolute floor for ping latency would be about 478 milliseconds.

That’s just the time for the signals to make two round trips to geosynchronous orbit: up and down for the request, up and down again for the response.  In reality, you also have to add the time to route the packets on either end, plus the re-transmission time at the satellite itself.
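
To make the arithmetic explicit: a request-response cycle crosses the roughly 22,236-mile (35,786km) altitude of geosynchronous orbit four times, so in the idealized all-vacuum, straight-overhead case:

    \[
    d \approx 4 \times 35{,}786\ \mathrm{km} \approx 143{,}000\ \mathrm{km},
    \qquad
    t_{\mathrm{floor}} = \frac{d}{c}
      \approx \frac{1.43 \times 10^{8}\ \mathrm{m}}{2.998 \times 10^{8}\ \mathrm{m/s}}
      \approx 478\ \mathrm{ms}
    \]

Real signal paths are slanted rather than straight up, so actual figures only get worse from there.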

But that’s not the real connection killer in most cases: packet loss is.  After all, these packets are going to orbit and back.  Lots of things along those long and lonely signal paths can cause the packets to get dropped.  50% packet loss is not uncommon; 80% is not unexpected.

So, you’re losing half your packets (or more), and the packets that aren’t lost have latency times around two-thirds of a second (or more).  Each.

That’s reason enough to set up a local caching server.  Another, even more pressing reason is that pretty much all commercial satellite connections come with data caps.  Where I was, their cap was 50GB/month.  Beyond that, they could either pay overages, or just not have data until the next month.  So if you can locally cache URLs so that they only count against your data usage the first time they’re loaded, you do that.  And someone had, for the school where I was teaching.
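
(The school’s cache wasn’t mine to configure, but for a flavor of what such a setup looks like: a local cache is often just a Squid proxy and a few lines of configuration, along the lines of this sketch — the paths and sizes here are illustrative, not the school’s actual settings.)

    # squid.conf sketch: listen on the LAN, keep up to ~20GB of
    # previously-fetched objects on local disk, and serve repeat
    # requests from there instead of the satellite link.
    http_port 3128
    cache_mem 256 MB
    maximum_object_size 512 MB
    cache_dir ufs /var/spool/squid 20000 16 256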

But there I stood anyway, hoping my requests to load simple web pages would bear fruit, and I could continue teaching basic web principles to a group of vocational students.  Because Wikipedia wouldn’t cache.  Google wouldn’t cache.  Meyerweb wouldn’t cache.  Almost nothing would cache.

Why?

HTTPS.

A local caching server, meant to speed up commonly-requested sites and reduce bandwidth usage, is a “man in the middle”.  HTTPS, which by design prevents man-in-the-middle attacks, utterly breaks local caching servers.  So I kept waiting and waiting for remote resources, eating into that month’s data cap with every request.

The drive to force every site on the web to HTTPS has pushed the web further away from the next billion users — not to mention a whole lot of the previous half-billion.  I saw a piece that claimed, “Investing in HTTPS makes it faster, cheaper, and easier for everyone.”  If you define “everyone” as people with gigabit fiber access, sure.  Maybe it’s even true for most of those whose last mile is copper.  But for people beyond the reach of glass and wire, every word of that claim was wrong.

If this is a surprise to you, you’re by no means alone.  I hadn’t heard anything about it, so I asked a number of colleagues if they knew about the problem.  Not only had they not, they all reacted the same way I did: this must not be an actual problem, or we’d have heard about it!  But no.

Can we do anything?  For users of up-to-date browsers, yes: service workers create a “good” man in the middle that sidesteps the HTTPS problem, so far as I understand.  So if you’re serving content over HTTPS, creating a service worker should be one of your top priorities right now, even if it’s just to do straightforward local caching and nothing fancier.  I haven’t gotten one up for meyerweb yet, but I will do so very soon.
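
For what it’s worth, the straightforward version really is small.  Here’s a minimal cache-first sketch (the cache name and pre-cached URLs are placeholders, not a drop-in solution):

    // sw.js — register it from a page with:
    //   navigator.serviceWorker.register('/sw.js');
    const CACHE = 'local-cache-v1';

    self.addEventListener('install', (event) => {
      // Pre-cache a few core assets at install time.
      event.waitUntil(
        caches.open(CACHE).then((cache) => cache.addAll(['/', '/style.css']))
      );
    });

    self.addEventListener('fetch', (event) => {
      // Answer from the local cache when we can; otherwise hit the
      // network once and stash a copy for next time.
      event.respondWith(
        caches.match(event.request).then((cached) =>
          cached ||
          fetch(event.request).then((response) => {
            const copy = response.clone();
            caches.open(CACHE).then((cache) => cache.put(event.request, copy));
            return response;
          })
        )
      );
    });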

That’s great for modern browsers, but not everyone has the option to be modern.  Sometimes they’re constrained by old operating systems to run older browsers, ones with no service-worker support: a lab full of Windows XP machines limited to IE8, for example.  Or on even older machines, running Windows 95 or other operating systems of that era.  Those are most likely to be the very people who are in situations where they’re limited to satellite internet or other similarly slow services with unforgiving data caps.  Even in the highly-wired world, you can still find older installs of operating systems and browsers: public libraries, to pick but one example.  Securing the web literally made it less accessible to many, many people around the world.

Beyond deploying service workers and hoping those struggling to bridge the digital divide make it across, I don’t really have a solution here.  I think HTTPS is probably a net positive overall, and I don’t know what we could have done better.  All I know is that I saw, first-hand, the negative externality that was pushed onto people far, far away from our data centers and our thoughts.

My thanks to Tim Kadlec and Ethan Marcotte for their feedback and insight while I was drafting this post, and to Lara Hogan and Aaron Gustafson for their early assistance with my research.


CSS: The Definitive Guide, 4th Edition

Published 7 years, 9 months past

On Monday, July 3rd, as I sat in the living room of a house just a bit north of New York City, I pushed the last writing and editing changes to CSS: The Definitive Guide, Fourth Edition and notified the production department at O’Reilly that it was ready.

All twenty chapters, three appendices, and associated front matter are now in their hands.

It’s been a long and difficult journey to get here.  Back in 2011-2012, I started updating chapters and releasing them as standalone books, for those who wanted to grab specific topics early.  In mid-2013, I had to stop all work on the book, and wasn’t really able to get back into it until mid-2015.  At that point, I realized that several new chapters had to be added — for example, when I started out on this edition, Flexbox and Grid were pie-in-the-sky ideas that might or might not come to pass.  Feature queries weren’t a thing, back then.  Filters and masks and blend modes were single-browser at best, when I started out.  And forget about really complex list counters.

Now all those topics (and more!) have chapters, or at least major sections.  Had I not been delayed two years, those topics might not have made it into the fourth edition.  Instead, they’re in there, and this edition may well end up twice as long as the previous edition.

I also might not have brought on a co-author, the inestimable Estelle Weyl.  If not for her contributions of new material and her close, expert review of the chapters I’d already written, this book might have been another year in the making.  The Guide was always my baby, but I couldn’t be happier that I decided to share it with Estelle, nor prouder that her name will be on the cover with mine.

Speaking of major changes, I probably wouldn’t have learned AsciiDoc, nor adopted Atom as an authoring environment (I still use BBEdit for heavy-lift text processing, as well as most of my coding).  O’Reilly used to be a “give us your Word docs!” shop like everyone else, but that toolchain doesn’t really exist any more, from what I can tell.  In fact, the first few chapters I’d given them were in Word.  When I finally returned to writing, they had to give me those chapters back as AsciiDoc exports, so I could make updates and push them to O’Reilly’s internal repository.  The files I used to create the figures in the book went into their own public repository, which I’ll get to reorganizing once the text is all settled and the figure numbers are locked in.  (Primary to do: create chapter lists of figures, linked to the specific files that were used to create those figures.  Secondary to do: clean up the cruft.)

As of this moment, the table of contents is:

  • Preface
  1. CSS and Documents
  2. Selectors
  3. Specificity and the Cascade
  4. Values, Units, and Colors
  5. Fonts
  6. Text Properties
  7. Basic Visual Formatting
  8. Padding, Borders, Outlines, and Margins
  9. Colors, Backgrounds, and Gradients
  10. Floating and Shapes
  11. Positioning
  12. Flexible Box Layout
  13. Grid Layout
  14. Table Layout in CSS
  15. Lists and Generated Content
  16. Transforms
  17. Transitions
  18. Animation
  19. Filters, Blending, Clipping, and Masking
  20. Media-Dependent Styling
  • Appendix A: Animatable Properties
  • Appendix B: Basic Property Reference
  • Appendix C: Color Equivalence Table

Disclaimer: the ordering and titles could potentially change, though I have no expectation of either.

I don’t have a specific timeline for release as yet, but as soon as I get one, I’ll let everyone know in a post here, as well as the usual channels.  I expect it to be relatively speedy, like the next couple of months.  Once production does their thing, we’ll get it through the QC process — checking to make sure the figures are in the right places and sizes, making sure no syntax formatting got borked, that kind of thing — and then it’ll be a matter of getting it out the door.

And just in case anyone saw there was news about O’Reilly’s change in distribution and is wondering what that means: you can still buy the paper book or the e-book from your favorite retailer, whether that’s Amazon or someone else.  You just won’t be able to buy direct from O’Reilly any more, except in the sense that subscribing to their Safari service gives you access to the e-book.  That does mean a tiny bit less in royalties for me and Estelle, since direct paper sales were always the highest earners.  Then again, hardly anyone ever bought their paper copies direct from O’Reilly, so honestly, the difference will be negligible.  I might’ve been able to buy an extra cup of coffee or two, if I drank coffee.

It feels…well, honestly, it feels weird to have finally reached this point, after such a long time.  I wish I’d gotten here sooner for a whole host of reasons, but this is where we are, and regardless of anything else, I’m proud of what Estelle and I have created.  I’m really looking forward to getting it into your hands.


Practical CSS Grid

Published 8 years, 1 month past

…In the run-up to Grid support being released to the public, I was focused on learning and teaching Grid, creating test cases, and using it to build figures for publication.  And then, March 7th, 2017, it shipped to the public in Firefox 52.  I tweeted and posted an article and demo I’d put together the night before, and sat back in wonderment that the day had finally come to pass.  After 20+ years of CSS, finally, a real layout system, a set of properties and values designed from the outset for that purpose.

And then I decided, more or less in that moment, to convert my personal site to use Grid for its main-level layout.

The result: me, writing for A List Apart, taking you on a detailed, illustrated walkthrough of how I added CSS Grid layout to meyerweb.com, while still leaving the old layout in place for non-Grid browsers.  As I write this, Grid is available in the latest public releases of Firefox, Chrome, and Opera, with Safari likely to follow suit within the next few weeks.  Assuming the last holds true, that’s four major browsers shipping major support in the space of one month.  As Jen Simmons hashtagged it, it’s a new day in browser collaboration.

As I’ve said before, I understand being hesitant.  Based on our field’s history, it’s natural to assume that Grid as it stands now is buggy, incomplete, and will have a long ramp-up period before it’s usable.  I am here to tell you, as someone who was there for almost all of that history, Grid is different.  There are areas of incompleteness, but they’re features that haven’t been developed yet, not bugs or omissions.  I’m literally using Grid in production, right now, on this site, and the layout is fine in both Grid browsers and non-Grid browsers (as the article describes).  I’m very likely to add it to our production styles over at An Event Apart in the near future.  I’d probably have done so already, except every second of AEA-related work time I have is consumed by preparations for AEA Seattle (read: tearing my new talk apart and putting it back together with a better structure).
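
The approach, in miniature, looks something like this (the selectors are invented for illustration; the article walks through the real thing): the old layout stays as the default, and Grid is layered on behind a feature query that non-Grid browsers simply skip.

    /* Legacy layout: the default every browser understands. */
    .sidebar { float: left; width: 15em; }
    .main    { margin-left: 16em; }

    /* Grid layered on top, only where it’s supported. */
    @supports (display: grid) {
      body { display: grid; grid-template-columns: 15em 1fr; }
      .sidebar, .main { float: none; width: auto; margin-left: 0; }
    }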

Again, I get being wary.  I do.  We’re used to new CSS stuff taking two years to get up to usefulness.  Not this time.  It’s ready right now.

So: dive in.  Soak up.  Enjoy.  Go forth, and Grid.


Doubled Grids

Published 8 years, 1 month past

Chrome 57 was released yesterday, not quite a week ahead of schedule, with Grid support enabled.  So that’s two browsers with Grid support in the space of two days, including the most popular browser in the world right now.  Safari has it enabled in Technology Preview builds, and the WebKit team just blogged an introduction to Grid, so it definitely feels like it’ll be landing there soon as well.  No word from Edge, so far as I know.

I did discover a Chrome bug in Grid this morning, albeit one that might be fairly rare.  I filed a bug report, but the upshot is this: most or all of an affected page is rendered, and then gets blanked.  I ran into a similar bug earlier this year, and it seemed to affect people semi-randomly — others with the same OS as me didn’t see it, and others with different OSes did see it.  This leads me to suspect it’s related to graphics cards, but I have no proof of that at all.  If you can reproduce the bug, and more importantly come up with a reliable way to fix it, please comment on the Chromium bug!


Proper Filter Installation

Published 8 years, 2 months past

I ran into an interesting conceptual dilemma yesterday while I was building a test page for the filter property.  In a way, it reminded me a bit of Dan Cederholm’s classic SimpleQuizzes, even though it’s not about HTML.

First, a bit of background.  When I set up test suites, directories of example files, or even browser-based figures for a book, I tend to follow a pattern of having all the HTML (or, rarely, XML) files in a single directory.  Inside that directory, I’ll have subdirectories containing the style sheets, images, fonts, and so on.  I tend to call these c/, i/, and f/, but imagine they’re called css/, images/, and fonts/ if that helps.  The names aren’t particularly important — it’s the organizational structure that matters here.
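
In other words, a typical project of mine looks something like this (the file names are invented for illustration):

    project/
      example1.html
      example2.html
      c/    (style sheets — think css/)
      i/    (images — think images/)
      f/    (fonts — think fonts/)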

So, with that groundwork in place, here’s what happened: I wrote some SVG filters, and put them into an SVG file for referencing via the url(filters.svg#fragment) filter function pattern.  So I had this SVG file that didn’t actually visually render anything; it was just a collection of filters to be applied via CSS.
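
To illustrate the setup (the filter contents and names here are invented for this example): the SVG file renders nothing by itself, and the CSS reaches into it by fragment identifier.

    <!-- filters.svg: definitions only, no visual output of its own -->
    <svg xmlns="http://www.w3.org/2000/svg">
      <filter id="desaturate">
        <feColorMatrix type="saturate" values="0"/>
      </filter>
    </svg>

    /* in the style sheet */
    .hero {
      filter: url(filters.svg#desaturate);
    }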

I clicked-and-held the mouse button, preparing to drag the file into a subdirectory…and suddenly stopped.  Where should I put it?  css/, or images/?  It clearly wasn’t CSS.  Even if I were to rename css/ to styles/, are filter definitions really styles?  I’m not sure they are.  But then, what is an image that has no visual output?

(Insert “one hand clapping” reference here.)

Sure, I could set up an svg/ subdirectory, but then I’d just end up with SVG images (as in, SVGs that actually have visual output) mingled in with the filter-file… and furthermore, segregated from all the other images, the PNGs and JPGs still hanging out in images/.  That seems weird.

I could establish a filters/ subdirectory, but that seems like overkill when I only planned to have a single file containing all the filters; and besides, I’m not in the habit of creating subdirectories that relate to only a single HTML file.

I could dodge the whole question by establishing a generic assets/ subdirectory, although I’ve long been of the opinion that assets/, when it isn’t used to toss all of your asset classes into their own subdirectories, is just a fancy alias for misc/.  And I dislike tossing stuff into misc/, the messy kitchen junk drawer of any project.

I came to a decision in the end, but I’m not going to tell you what it was, because I’m curious: what would you do in that situation?


A New Online Course: Design for Humanity

Published 8 years, 2 months past

As longtime readers know, my professional focus has been very different the past couple of years.  Ever since the events of 2013-2014, I’ve been focusing on design and meeting the needs of people — not just users, but complete people with complex lives.  I teamed up with Sara Wachter-Boettcher to write Design for Real Life, and presented talks at An Event Apart called “Designing for Crisis” (2015) and “Compassionate Design” (2016; video to come).  I’m not done with CSS — I should have news on that front fairly soon, in fact — but a lot of my focus has been on the practice of design, and how we approach it.

To that end, I’ve been spending large chunks of the last few months creating and recording a course for Udemy called “Design for Humanity”, and it’s now available.  The course is based very heavily on Design for Real Life, following a similar structure and using many of the examples from the book, plus new examples that have emerged since the book was published, but it takes a different approach to learning.  Think of it as a companion piece.  If you’re an auditory processor as opposed to a visual processor, for example, I think the course will really work for you.

Who is the course for?  I put it like this:

This course will help you if you are part of the design process for a product or service, whether that’s a website, an app, an overall experience, or a physical product. You might be a product designer or product manager, an entrepreneur or work in customer service or user research, an experience designer or an information architect. If you have been impacted by bad design and want to do better, this course is for you.

I know a lot of courses promise they’re just right for whoever you are, no really, but in this case I honestly feel like that’s true for anyone who has an interest in design, whether that’s visual design, system design, or content design.  It’s about changing perspective and patterns of thinking — something many readers of the book, and people who’ve heard my talks, say they’ve experienced.

If you’ve already bought the book, then thank you!  Be on the lookout for email from A Book Apart containing a special code that will give you a nice discount on the course.  If you haven’t picked up the book yet, that’s no problem.  I have a code for readers of meyerweb as well: use MW_BLOG to get 20% off the sale price of the course, bringing it down to a mere $12, or slightly less than $3 per hour!  (The code is good through February 28th, so you have a month to take advantage of it.)

If you like the course, please do consider picking up the book.  It’s a handy format to have close to hand, and to lend to others.  On the flip side, if you liked the book, please consider checking out the course, containing as it does new material and some evolution of thinking.

And either way, whether it’s the book or the course, if you liked what you learned, please take a moment to write a short review, say something on the interwebs, and generally spread the word to colleagues and co-workers.  The more people who hear the message, the better we’ll become as an industry at not just designing, but designing with care and humanity.


Element Dragging in Web Inspectors

Published 8 years, 3 months past

Yesterday, I was looking at an existing page, wondering if it would be improved by rearranging some of the elements.  I was about to fire up the git engine (spawn a branch, check it out, do edits, preview them, commit changes, etc., etc.) when I got a weird thought: could I just drag elements around in the Web Inspector in my browser of choice, Firefox Nightly, so as to quickly try out various changes without having to open an editor?  Turns out the answer is yes, as demonstrated in this video!

Youtube: “Dragging elements in Firefox Nightly’s Web Inspector”

Since I recorded the video, I’ve learned that this same capability exists in public-release Firefox, and has been in Chrome for a while.  It’s probably been in Firefox for a while, too.  What I was surprised to find was how many other people were similarly surprised that this is possible, which is why I made the video.  It’s probably easier to understand the video if it’s full screen, or at least expanded, but I think the basic idea gets across even in small-screen format.  Share and enjoy!
