Posts in the Standards Category

SES Chicago Report

Published 19 years, 11 months past

Due to some weather-related travel upheavals, I didn’t get to spend as much time at SES Chicago as I would have liked—I ended up flying in Tuesday afternoon, speaking before lunch Wednesday, and leaving Wednesday evening.  Still, the panel went very well, the speakers were quite gracious, and I didn’t even need a fire extinguisher.

Based on what was said in the panel and the fleeting conversations I was able to have (sometimes from the podium) with Matt Bailey and Shari Thurow, here’s what I took away from the conference:

  • Semantic markup does not hurt your search engine rankings.  It may even provide a small lift.  However, the lift will be tiny, and it isn’t always a semantic consideration.  Search engines seem to use markup the same way humans do: headings and elements that cause increased presentational weight, such as <strong> and <i>, will slightly raise the weight of the content within said elements.  So even the presentational-effect elements can have an effect.  The panelists also stated that if you’re using elements solely to increase ranking, you’re playing a loser’s game.
  • The earlier content sits in the document, the more weight it has… but again, this is a very minor effect.
  • Hyperlink title attribute text and longdesc text have no effect, positive or negative, on search engine ranking.  The advice given was to have a link’s title text be the same as its content, and that anything you’d put into a longdesc should just go into the page itself.  (Remember: this advice is ruthlessly practical and specific to search-engine ranking, not based on any notions of purity.)
  • Having a valid document neither helps nor hurts ranking; validation is completely ignored.  The (paraphrased) statement from a Yahoo! representative was that validation doesn’t help find better information for the user, because good information can (and usually does) appear on non-valid pages.
  • Search engine indexers don’t care about smaller pages, although the people who run them do care about reducing bandwidth consumption, so they like smaller pages for that reason.  But not enough to make it affect rankings.
  • A lot of things that we take for granted as being good, like image-replacement techniques and Flash replacement techniques, are technologically indistinguishable from search-engine spamming techniques.  (Mostly because these things are often used for the purpose of spamming search engines.)  Things like throwing the text offscreen in order to show a background image, hiding layers of text for dynamic display, and so forth are all grouped together under the SEO-industry term “cloaking”.  As the Yahoo! guy put it, 95% of cloaking is done for the specific purpose of spamming or otherwise rigging search engine results.  So the 5% of it that isn’t… is us.  And we’re taking a tiny risk of search-engine banishment because our “make this look pretty” tools are so often used for evil.
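
To make that last point concrete, here is roughly what a garden-variety image-replacement rule looks like; the selector and image name are placeholders, but the offscreen-text move is the part an indexer can’t tell apart from deliberate cloaking:

h1#masthead {
   width: 200px;
   height: 80px;
   background: url(logo.png) no-repeat;   /* show the pretty image... */
   text-indent: -9999px;                  /* ...and shove the real text offscreen */
}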

Reading that last point, you might be wondering: how much of a risk are you taking?  Very little, as it turns out.  Search engine indexers do not try to detect cloaking and then slam you into a blacklist—at least, they don’t do that right now.  To get booted from a search engine, someone needs to have reported your site as trying to scam search engines.  If that happens, then extra detection and evaluation measures kick in.  That’s when you’re at risk of being blacklisted.  Note that it takes, in effect, a tattletale to make this even a possibility.  It’s also the case that if you find you’ve been booted and you think the booting unfair, you can appeal for a human review of your site.

So using standards will not, of itself, increase your risk of banishment from Google.  If someone claims to Google that you’re a dirty search spammer, there’s a small but nonzero chance that you’ll get booted, especially if you’re using things like hidden text.  If you do get booted and tell Google you aren’t a spammer, and they check and agree with you, you’ll be back in the index immediately.

So there’s no real reason to panic.  But it’s still a bit dismaying to realize that the very same tools we use to make the Web better are much more often used to pollute it.  I don’t suppose it’s surprising, though.

Due to my radically compressed schedule, I was unfortunately not able to ask most of the questions people suggested, and for that I’m very sorry.  There was some talk of having me present at future SES conferences, however, so hopefully I’ll have more chances in the future.  I’ll also work the e-mail contacts I developed to see what I can divine.


Unbreaking the Web

Published 19 years, 11 months past

While I was in Florida with my family visiting both sets of parents, Tristan Nitot published an article titled “How Microsoft can support CSS2 without breaking the Web”.  In it, Tristan points to a comment made by Gary Schare, Director of Windows Product Management at Microsoft, which was:

We could change the CSS support and many other standards elements within the browser rendering platform. But in doing so, we would also potentially break a lot of things.

(from Microsoft Windows Exec Talks IE, Firefox)

Tristan then goes on to refute this line of thinking.  Generally speaking, I’m entirely in agreement with him.  (As a disclaimer, Tristan and I worked together as members of the Netscape Standards Evangelism Team, and Tristan asked me for feedback on his article before it was published.)

Here’s the thing: in the Windows world, Explorer already significantly upgraded its standards support four different times.  The most recent such upgrade was called IE6.  That was the version that first added DOCTYPE switching to IE/Win.  At that time, there were a great many changes made to the standards support, nearly every one for the better.  For example, in standards mode, you could no longer throw around unitless numbers and have them interpreted as pixels, because that violated the CSS specification.  You couldn’t set a height or width for an inline non-replaced element, because that too was incorrect.  The interpretation of font-size keywords was changed to reflect the CSS specification instead of the HTML font-sizing regime.  The box model was altered to follow CSS instead of the old IE way.  In short, there was all kinds of stuff in there that would “break a lot of things”.
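
To give a feel for the sort of change involved, consider a made-up rule like this one (the selector and values are purely illustrative):

div.sidebar {
   width: 300px;
   padding: 10px;
   border: 5px solid gray;
}

Under the old IE/Win model, that box renders 300 pixels across, borders and padding included; in standards mode, the content area alone is 300 pixels wide, so the whole box comes out to 330 pixels.  Likewise, a unitless declaration such as width: 300 was treated as pixels under the old behavior, but in standards mode it’s simply invalid and gets dropped.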

The Web rather steadfastly declined to be broken.  Oh, sure, there were pages whose layout was altered—not many, thanks to the way DOCTYPE switching was implemented, but they were out there.  Anyone who was relying on the IE/Win way of doing things but used a DOCTYPE that triggered standards mode (say, for example, an HTML 4.01 Transitional DOCTYPE with URI) ended up with a “broken” page.  These problems were fixed by their authors, and that was that.  I remember a number of forum posts about how “IE6 broke my design”, and the posts that helped those authors address the problem.  In the case of old, unmaintained pages, they stayed broken, but odds are that next to nobody cared.  Regardless, it isn’t exactly a point of major concern on any radars I’ve seen in the last three years.
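
For reference, this is the kind of DOCTYPE in question; it’s the presence of the URI, along with the Transitional identifier, that tips IE6 into standards mode:

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
   "http://www.w3.org/TR/html4/loose.dtd">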

Furthermore, IE6 fixed a number of parsing bugs that existed in previous versions.  One of those was the bug on which Tantek Çelik’s “Box Model Hack” depended.  However, the parsing bug was fixed in both quirks and standards modes, so the BMH utterly failed to work in IE6 no matter what DOCTYPE you used.  That actually did break quite a few layouts, if I remember correctly.  I also remember the day I discovered that they’d fixed the parsing bug in both standards and quirks modes.  I swore at my monitor for a moment, and then actually thought about it.  I realized that the inconvenience of removing a few CSS hacks, or at worst changing to different hacks, was a pittance to pay in comparison to the advances IE6 had made in terms of increased standards support.
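
For anyone who never ran across it, the Box Model Hack leaned on exactly that parsing bug; a typical instance looked something like this (the widths are arbitrary):

div.content {
   width: 400px;            /* IE5.x/Win reads this, then chokes on the next line */
   voice-family: "\"}\"";   /* buggy parsers think the declaration block ends here */
   voice-family: inherit;
   width: 300px;            /* parsers that handle the escapes correctly use this value */
}

Once IE6 started parsing the escaped string correctly in quirks mode as well, it applied the second width while still using its old box model, and layouts that depended on the first value came out too narrow.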

So I fixed a few style sheets, tossed a hack out of my mental toolbox, and got on with my life.  I contend that exactly the same thing would happen if a service pack were to add increased standards support to IE.

This is particularly true given that most of what IE should add would be, well, additions.  As in, things that IE doesn’t even try to support now, and so almost nobody uses them.  Think generated content.  Think attribute selectors.  Think fixed positioning.  These are all things that, if they were added to IE, would break almost no pages at all.  In fact, they’d make a small number of pages work better in IE.
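
A few made-up rules show what I mean; since browsers drop any selector or declaration they don’t understand, none of these changes how existing pages render in IE today:

p.note:before { content: "Note: "; }           /* generated content */
a[title] { border-bottom: 1px dotted gray; }   /* an attribute selector */
#banner { position: fixed; top: 0; left: 0; }  /* fixed positioning */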

For that incredibly small number of pages that would break (for whatever value of “break” you care to name) with improved standards support in IE6, I’m willing to bet that nearly all of them would get fixed right away.  Why?  Because they would be pages maintained by authors who actually want to use standards and care about doing things right.

Now, there is one area where I think the IE team would have to be careful about adding support, and that’s selectors.  A lot of hide-from-IE CSS hacks these days are based on its failure to support the child selector; in fact, I use these in a few places in the S5 style sheets.  It is possible that adding support for child selectors to IE6 would be more harmful than beneficial.  I say it’s possible because I don’t know.  Nobody does—but Microsoft of all organizations has the ability to find out, and to act accordingly.  They have the funding, the personnel, the skills, and the customer base.  As Tristan said:

In its short, 2 1/2 year life, the Netscape Evangelism team helped literally thousands of authors and administrators of web sites around the world to improve their support for the W3C DOM and CSS Standards. If such a small group with limited resources can help change the web, imagine what Microsoft could do with its resources if it only tried.

Indeed.
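
(For the record, the child-selector hacks in question usually look something like the following; the selector is purely illustrative.  Because IE/Win doesn’t understand the > combinator, it never matches the second rule, and authors lean on that to feed it a different value than everyone else gets.)

#content { margin-left: 14em; }            /* the value IE/Win ends up using */
html>body #content { margin-left: 13em; }  /* the value child-selector-aware browsers use */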

Granted, the net stands still for no one, not even Microsoft.  There have already been, and continue to be, efforts to graft better standards support onto IE despite itself:  projects like PNG transparency fixes, whatever:hover, and IE7 take Microsoft’s proprietary behaviors and use them to make it easier to use open standards.  (I adore the poetry of that.)  The people behind those projects are already doing what Microsoft is apparently afraid to do, and they demonstrate why improving standards does not mean breaking the Web.

There’s one other point to consider.  If IE/Win improved its standards support in any meaningful way, believe me when I say that the news would be shouted from the Web site of every standards advocate in the known universe.  Nobody responsible for standards-oriented pages could avoid hearing about it.  Any problems would be quickly explained, and adjustments made.  Life would not only go on, but be better for developers and designers.

To sum up: the “more standards will break stuff” argument just doesn’t fly any more.  Microsoft can figure out what to do that won’t break pages, and there’s a ton of things that are new-to-IE, the implementation of which will no more break pages than did the image toolbar.  In cases that might cause breakage, Microsoft can determine—with community help, if they were to ask for it—how to minimize breakage while maximizing benefit.  To claim that possible Web page breakage prevents Microsoft from increasing standards support makes about as much sense as to claim that possible program breakage prevents them from ever changing or improving their operating system.

Despite this, I don’t have much hope that we’ll see any improvements before Longhorn debuts.  I think that’s a shame, because I remember when the IE team was gung-ho about standards.  There were a number of very smart people who understood why standards were important, and were committed to doing their best to support standards in IE—not just on the Macintosh, but for Windows as well.

I do hope for Microsoft’s sake that those days return.  Because the Web continues to move, and if they just stand there promising that everything will be better in Longhorn, they may well find themselves left behind.


Preparing For SES Chicago

Published 20 years, 1 day past

Some of you may recall that a while back, I let my mouth run sarcastic in the direction of some SEO experts, a conference on the topic, and by implication an entire industry… and ended up publicly apologizing for same, when it turned out I’d been, if not libelous, then at the very least grossly unfair.  In the process, Danny Sullivan of Search Engine Watch, the guy who organized the conference I was maligning, floated the idea that I might come speak at one of their conferences.  I indicated that I’d be interested.

As a result, I’ll be appearing on a panel at SES Chicago.  The other two panel members are, as it happens, the very same people I bad-mouthed back in August.  So that ought to be interesting.  More information, and links, are available on the Events page at Complex Spiral Consulting.

So what standards-centric question(s) do you think I should ask of these SEO experts?  The one that’s top of my list is: “Exactly what effect, if any, does semantic markup have on search engine rankings?”  A related question: “How does content ordering affect search engine indexing?”  I’m sure there are others, and I’m happy to be a conduit for asking them of people who should know, and getting the answers back to you.  So ask away—and be polite, please.  I’m not going to be talking to comment spammers, but to people who learn the ins and outs of search engine behaviors to help clients get wider exposure of their content.  They’re not too dissimilar from those of us who learn the ins and outs of browser behavior to help clients get their content online in the first place.  I failed to respect that before, and won’t make the same mistake again.


Simplicity Where It Counts

Published 20 years, 1 day past

Adam Bosworth recently gave a talk about simplicity in technology, and how it’s far more important to be simple and sloppy than complex and pure.  For the most part I agree with him; my first draft of “XFN and FOAF” was, basically, “FOAF is complicated.  XFN isn’t.”  While that was a flip way to amuse the reviewers, it also summarizes the core reason I took to XFN when Tantek and Matt explained it to me.  Making things easy is always preferable to making them hard.  As RFC 1925 so correctly states, “In protocol design, perfection has been reached not when there is nothing left to add, but when there is nothing left to take away.”

There are two places where I’d like to take issue with Adam, however.  (And I hope it’s not presumptuous of me to call him “Adam” instead of “Mr. Bosworth”.)

The first is the idea that the Web should never be more complicated than plain old HTML.  Adam says: “I very much doubt that an HTML that had initially shipped as a clean layered set of content (XML, Layout rules – XSLT, and Formatting- CSS) would have had anything like the explosive uptake.”  This is absolutely common sense.  I think it’s also common sense that any truly popular and useful system starts out basic—’primitive’, if you like—and grows in complexity.  The best such systems hide the complexity from the end user.  Automobiles have gone from a few very basic controls (steering, acceleration, braking) to the mobile gadget factories we pilot today.  I still don’t have to know how the engine works to drive one.

So I absolutely agree that HTML caught on because it was simple.  It had to be, because there was nothing to shield us from the system’s innards.  When the popular Web was getting started, I did my best to promote its use by writing a trilogy of HTML tutorials (1, 2, 3).  My entire goal there was to make it easier for anyone to publish their own content.  Want to publish your grandmother’s Apple Pan Dowdy recipe?  More pictures of pets?  Bring ’em on!  The way I figured, if everyone published what they knew, the best information would slowly rise to the top by virtue of gathering more links.  Years later, Google built an empire around the concept of the best information being the most heavily linked.  Ah well, another fortune lost.

Even then, I did take the time to teach well-formed markup because I knew that malformed markup was likely to cause trouble.  This wasn’t an elitist thing, or some sort of far-reaching clairvoyance: at the time, it was possible to break browsers with incorrect nesting of inline elements.  So it only made sense to tell readers, “Hey, if you nest elements, make sure they actually nest instead of just closing them at random.  Otherwise your page might not get displayed.”  Eventually, browsers got more tolerant of sloppy markup, and those kinds of warnings weren’t really needed any longer.
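
The kind of mistake I warned against was as simple as this made-up snippet:

<!-- improperly nested: the elements overlap instead of nesting -->
<p>This is <b><i>very important</b></i> stuff.</p>

<!-- properly nested: last opened, first closed -->
<p>This is <b><i>very important</i></b> stuff.</p>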

So anyway, here we are ten-plus years later, and we’re still arguing about table layout and XHTML being lower-case and blah blah blah.  In the first place, table-driven layout isn’t simple.  It’s flexible and sloppy, but not simple.  The vast majority of table-layout authors have never touched a tag in their lives.  They’ve just told some tool “do this”, and it did it.  They don’t care how.  They shouldn’t have to care how.  So whether the tool generates 50KB of table-and-spacer markup, or 15KB of semantic markup with another 5KB of CSS, is wholly irrelevant to the user.  As it should be.

It is, however, relevant to those of us who ply the back end of the Web.  I’m not going to go over the arguments now; you probably know them, and know how you feel.  But my feeling has always been that once word processors stopped fighting over file formats, and got on to fighting over ease of use and features while all reading the same formats, that’s when word processing software took off.  I feel the same about the Web.

Now, I’m kind of a fan of CSS.  Have been for a while.  I like it because it lets the design (largely) happen away from the markup, where changes are easier, and it allows all kinds of stuff that HTML layout never managed.  (Two examples right off the top of my head: letter-spacing and border-style.)  I also like JavaScript and the DOM—not because they’re pure, but because they do what they need to do.  Inspired by the work of others, I put all those pieces together recently and created a lightweight slide show system.  It isn’t perfect, and because it relies on DOM actions it doesn’t always tolerate sloppiness, but it’s pretty simple.  Once editors exist to let people just point, click, and type slides (and I suspect that such editors may exist in the relatively near future), it will be even easier.  And the underlying format will not matter to 95% of its users.  Nor should it.
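
About those two properties I mentioned: each is a one-liner in CSS, and neither has any real equivalent in old-school HTML layout.  The selector here is arbitrary:

h2 {
   letter-spacing: 0.2em;   /* spread the heading’s letters apart */
   border-style: dashed;    /* a border look tables and spacer GIFs can’t easily fake */
   border-width: 0 0 2px;   /* just along the bottom */
}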

Anyway, this brings me round to the other thing I wanted to talk about.  Early in the talk, Adam says:

…in one of the unintended ironies of software history, HTML was intended to be used as a way to provide a truly malleable plastic layout language which never would be bound by 2 dimensional limitations, ironic because hordes of CSS fanatics have been trying to bind it with straight jackets ever since, bad mouthing tables and generations of tools have been layering pixel precise 2 dimensional layout on top of it.

First off, I think that Tim Berners-Lee might have a slightly different perspective on what HTML was intended to do, but let’s skip that.  The part I don’t get is the perception that “CSS fanatics have been trying to bind” HTML.  Personally, I’ve been trying to free Web design from the limitations it’s long experienced.  I’ve been working in that direction for years now.  I won’t argue that the job is finished: far from it.  But a big reason I’ve long been an advocate of using CSS is that it loosens the straightjacket, not tightens it.  The work I did within the context of the CSS Working Group was intended to loosen the bonds even further.

Maybe that means I don’t meet Adam’s definition of a “CSS fanatic”; I don’t know.  Maybe I’m not one of the “hordes” of jackbooted geeks, seeking to impose my tyrannical notions on the unwashed.  Either way, this does bring me to the question I want to ask:  where does this perception come from, that CSS is promoted as a way to make the Web harder and HTML more difficult?  I’m not trying to belittle the idea by asking; I’m genuinely curious, and a little dismayed.  I know I’ve worked very hard to clearly describe what’s good and bad about CSS, just as I have about table-based design.  I’ve put a lot of effort into helping people who want to learn CSS do so, and explaining to those who ask why I think using CSS is a good idea.  Most of the other CSS advocates I know have done the same.  What is it that we did or didn’t do that got across this idea of inflexibility, or intolerance, or just plain elitism?

Because when you get right down to it, I’m still all in favor of people putting their stuff up for the world to see.  These days, I’m also in favor of making it easier for everyone else to see that stuff, and for tools to collect and analyze that stuff.  Standards make that easier—CSS makes that easier, not as a standalone savior, but as part of a mosaic.  If someone as well-regarded and experienced as Adam Bosworth doesn’t see that, then I can’t help but feel there’s been a failure to communicate.


S5 Validity

Published 20 years, 4 weeks past

Over the past few days, I’ve gotten a few complaints about S5 breaking in one browser or another—IE6 and Safari got the most mentions, but there were others.  As an example, there was a report that the slide show would just stop working after a certain number of slides.  In every case I’ve seen so far, these problems have been caused by invalid XHTML.

The most common validation problem I expect people to run into is with the structuring of lists.  For example, suppose you want two levels of lists on a slide.  You do it like this:

<ul>
<li>point one</li>
<li>point two
   <ul>
   <li>subpoint one</li>
   <li>subpoint two</li>
   </ul>
</li>
<li>point three</li>
<li>point four</li>
</ul>

Notice how the nested list is inside the li element?  That’s correct.  You should never put nested lists between list items on the ‘outer’ list, even though a lot of people have made that a habit.  The only element that can be a child of a ul (or an ol) element is an li.  That’s it.  Anything that needs to be ‘nested’ goes inside one of the list items.

Alternatively, you can put structures after the list, if that’s what you want.  As an example:

<ul>
<li>point one</li>
<li>point two</li>
<li>point three</li>
</ul>
<pre>
...code sample...
</pre>

Nothing wrong with that, as long as you keep the slide content inside the <div class="slide">...</div> element.  Or you could put your pre inside the last list item.  It’s really up to you.
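
For instance, that last option would look like this, reusing the list from above:

<ul>
<li>point one</li>
<li>point two</li>
<li>point three
   <pre>
...code sample...
   </pre>
</li>
</ul>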

Remember that S5 stands for “simple standards-based slide show system”.  That’s not just marketing: the CSS and scripts pretty much depend on valid markup structures.  If the markup is invalid, it will very likely lead to confusion and unexpected results.  In other words, violate the standards and they’ll violate your slide show.  There’s a certain poetic symmetry in that, I think.

(And yes, I do know that as of posting, this entry doesn’t validate.  Believe me, the irony is not at all lost on me.  This happened because I haven’t gotten around to fixing WordPress so it strips HTML before inserting the entry title into the title element.  I ranted about the problem a while back… and it will eventually get fixed.  Possibly when I upgrade to the next version of WP.)


S5 1.0

Published 20 years, 1 month past

Okay, folks, here it is: S5 version 1.0.  In addition to a few minor tweaks to make the system more robust, I’ve created a couple of themes to add to the ones Martin Hense created.  I have links to them all on the new S5 Themes page.  Share and enjoy.

One of the more notable tweaks is that the URL of slides.css is now read by the JS at document load, and used from then on.  Thus, you can point to a slides.css that’s in a different location than the rest of the UI files, if you so desire.  Another change is that the introductory slide show now contains some images, including one that maps out the file structure.  These were added so that new users would have some inkling of how to put images into a slide show.  There may of course be other ways of accomplishing the same task.
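
The idea, roughly sketched, is to walk the document’s link elements at load time and remember the one pointing at slides.css; the real S5 code differs in its details, but it amounts to something like this:

var slideCSS = '';
function findSlideCSS() {
   var links = document.getElementsByTagName('link');
   for (var i = 0; i < links.length; i++) {
      // remember the href of whichever stylesheet link points at slides.css
      if (links[i].href && links[i].href.indexOf('slides.css') > -1) {
         slideCSS = links[i].href;
      }
   }
}

From then on, anything in the script that needs the style sheet’s location can use the stored URL instead of assuming the file sits alongside the other UI files.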

There were a number of good ideas and code contributions, but they arrived too late to be included in v1.0.  I’ll add them to a “to do” list for v1.1.  As to the suggestion that the project be moved to SourceForge, it’s certainly an idea I’ll explore further.  I don’t know enough about SF to know how such an arrangement would work; I only ever go to SF to download stuff, and I find the site somewhat annoying in that it’s never immediately clear to me what I’m supposed to download, not to mention that finding detailed information about whatever I’m downloading seems much harder than it should be.  For now, I’ll keep S5 local to meyerweb.  It can always be migrated over to SF later on, if that turns out to be a good idea.

There are still limitations in the system.  For example, if the slide show assumes 1024×768 and your window is 800×600, then you’re likely to have content cut off by the footer.  So edit the CSS to assume 800×600 (the easiest step is to lower the font-size of the body element).  Or set things up so that scrollbars will appear on the slide content if it overflows the slide.  You get the general idea, I think: this is very much a DIY-type system, at least for now.  The JS works, and the core styles help it work, in a cross-browser fashion.  Anything after that is up to the theme author.
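
Either adjustment is only a line or two in the theme’s CSS; something along these lines, with the exact values left up to you (and, in the case of the overflow rule, assuming the theme gives the slides a constrained height):

/* shrink everything by lowering the base size until it fits 800x600 */
body { font-size: 75%; }

/* or let overlong slide content scroll rather than hide beneath the footer */
div.slide { overflow: auto; }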

There may one day be routines that automatically scale text, or dynamically break up slides, in order to solve the clipping problem.  There may also be features that let you trigger animations by hitting “next”, let you easily integrate SVG content, allow the use of the navigation menu in Opera Show, permit dynamic theme selection, and so on and so on.  For now, we have a good standards-based slide show system, one that should suffice for a great many people.

And my deepest thanks to all those people who have contributed, directly or otherwise, to S5, including those who made suggestions I haven’t yet folded into the system.  You have made, and will continue to make in the future, S5 better than I ever could have made it on my own.


Good Show

Published 20 years, 1 month past

Everyone’s been pointing to the newly restored Mount Saint Helens webcam page, mostly because it’s come back online just as geologic events such as earthquake swarms are occurring in the area.

I’m pointing to it for a different reason.  To see what I mean, view source on the webcam page, or hover your mouse over the webcam image in a modern browser.

Now that’s good alt text.  The title text isn’t bad, either.


Standards Savings

Published 20 years, 2 months past

Yesterday morning, I saw via somebody’s feed (most likely either Matt or Simon) that Rakesh Pai has published a piece called “The Economics of XHTML”, in which he explores and summarizes many points in favor of moving to XHTML.  As he says, XHTML and semantic HTML are basically the same thing, so when you read the article you should prepend the words “old-style” to the term HTML and “semantic HTML or” to the term XHTML.  Thus, the following paragraph from his article:

HTML files are rather complex. They have so much irrelevant information, it becomes difficult to manage them. XHTML on the other hand, if well planned, will be much easier to manage in the long run due to sheer simplicity of the files.

…should be read (emphasis indicates my modifications):

Old-style HTML files are rather complex. They have so much irrelevant information, it becomes difficult to manage them. Semantic HTML or XHTML on the other hand, if well planned, will be much easier to manage in the long run due to sheer simplicity of the files.

I admit I’m reading a bit into the text, but I think my reading is supported by Rakesh’s statement about the basic equivalence of semantic HTML and XHTML.

Overall, it’s a very good summary of the business reasons to shift to standards-oriented design.  It bothers me not at all that he left out a discussion of CSS in the article, since the focus was purely on the benefits to be gained from improving your markup.  It’s often useful to concentrate on small pieces in detail, so that by the time you’ve looked at the various pieces you have a much better understanding of the overall picture.  I would like to talk about an under-recognized aspect of his first point, which was (again, emphasis indicates an insertion on my part):

Semantic HTML or XHTML will give a definite reduction in amount of space used on the server, which translates to money saved. Large sites will also benefit from the bandwidth savings, though this might be insignificant for smaller sites.

It’s true that smaller sites probably won’t save a lot of money on bandwidth.  This is especially true given that many small businesses pay a flat rate each month so long as their bandwidth is below a certain level.  So sites in that range will likely not see significant reductions in expenses.

What they will see, though, is a faster site.  Okay, actually, that’s what their users will see, but that’s the entire point.  Suppose I said I could sell you a product that would make your Web site twice as fast as it is today.  How much would that be worth?  Maybe not much for a fandom site, but even for a small business trying to sell its products online and distinguish itself from competitors, a 2x speed booster would probably be worth buying.

The thing is that you don’t have to make a purchase to get this boost (unless you decide to hire a consultant like me to help you migrate to standards-oriented design).  There’s no product to buy.  Along with all the other benefits Rakesh describes—accessibility, ease of maintenance, and so on—a faster Web site is part of the standards package.

How so?  Far and away, the #1 factor in request-to-render time is the raw number of bytes you’re shipping to the user.  If you’re sending a dialup user a 60KB page, then even assuming an uninterrupted, uncongested connection it will take about 15 seconds for the page to render, measured from the time the user requests it.  (A 56K modem moves roughly 4KB of actual data per second, so 60KB works out to about 15 seconds.)  If you send a 30KB page, it will take half that time.

I’m not just saying that because it sounds true: while I was at Netscape, Bob Clary and I did experiments on the effects of standards-oriented design on request-to-render time.  We saw basically no speed improvement from using standards-oriented design beyond the reduction in page weight—but that was a substantial effect.  Thus, the actual markup you use is basically irrelevant in this arena.  The whole page could be one graphic scan of your brochure, or it could be a press release.  Either way, the more bytes you send over the wire, the slower the site will seem.  True, there are other considerations, like server load, network congestion, and so on.  All of those factors will be eased, and speed improved, if there are fewer bytes per page request.  It’s honestly that simple.

As I’ve been telling clients, this is what makes standards a big economic win: standards-oriented designs are nearly always much smaller than old-style designs.  Usually they’re half the size of old-school pages, although that reduction can vary; Microsoft’s home page size dropped by about two-thirds in terms of the markup weight alone.  With Microsoft’s traffic, they stand to save a boatload of money on that reduction.  They also have a site that’s faster, that feels less bloated, that leaves more of a positive impression on the user.  That’s an important consideration for any company, regardless of its size.  It’s a consideration that’s harder to quantify than reduced costs, of course, but not one that can be ignored.  As Richard Rutter shared, when Multimap went to a standards-oriented design, they didn’t see as much of a drop in bandwidth consumption as they’d expected, although there was a reduction.  What they saw was a dramatic upswing in page views, which meant increased revenue for the firm; in other words, they were able to serve more customers and increase revenue while using less bandwidth than before.

That’s what they call increased efficiency, and it’s a competitive advantage in any arena.

