Posts in the CSS Category

Linear Gradient Keywords

Published 12 years, 9 months past

Linear gradients in CSS can lead to all kinds of wacky, wacky results — some of them, it sometimes seems, in the syntax itself.

Let me note right up front that some of what I’m talking about here isn’t widely deployed yet, nor for that matter even truly set in stone.  Close, maybe, but there could still be changes.  Even if nothing actually does change, this isn’t a “news you can use RIGHT NOW” article.  Like so much of what I produce, it’s more of a stroll through a small corner of CSS, just to see what we might see.

For all their apparent complexity, linear gradients are pretty simple.  You define a direction along which the gradient progresses, and then list as many color stops as you like.  In doing so, you describe an image with text, sort of like SVG does.  That’s an important point to keep in mind:  a linear (or radial) gradient is an image, just as much as any GIF or PNG.  That means, among other things, that you can mix raster images and gradient images in the background of an element using the multiple background syntax.
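
For example, a gradient can sit underneath a raster image in a single background declaration; the image name here is purely illustrative:

background: url(flourish.png) no-repeat bottom right,
            linear-gradient(45deg, red, blue);

The first layer listed is drawn on top, with the gradient filling in behind it.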

But back to gradients.  Here’s a very simple gradient image:

linear-gradient(45deg, red, blue)

The 45deg defines the gradient line, which is the line that defines how the gradient progresses.  The gradient line always passes through the center of the background area, and its specific direction is declared by you, the author.  In this case, it’s been declared to point toward the 45-degree angle.  red and blue are the color stops.  Since the colors don’t have stop distances defined, the distances are assumed to be 0% and 100%, respectively, which means you get a gradient blend from red to blue that progresses along the gradient line.
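
Spelling those stop distances out explicitly yields exactly the same result:

linear-gradient(45deg, red 0%, blue 100%)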

You can create hard stops, too:

linear-gradient(45deg, green 50%, lightblue 50%)
Figure 1

That gets you the result shown in Figure 1, to which I’ve added (in Photoshop) an arrow showing the direction of the gradient line, as well as the centerpoint of the background area.  Each individual “stripe” in the gradient is perpendicular to the gradient line; that’s why the boundary between the two colors at the 50% point is perpendicular to the gradient line.  This perpendicularness is always the case.

Now, degrees are cool and all (and they’ll be changing from math angles to compass angles in order to harmonize with animations, but that’s a subject for another time), but you can also use directional keywords.  Two different kinds, as it happens.

The first way to use keywords is to just declare a direction, mixing and matching from the terms top, bottom, right, and left.  The funky part is that in this case, you’re declaring the direction the gradient line comes from, not that toward which it’s going; that is, you specify its origin instead of its destination.  So if you want your gradient to progress from the bottom left corner to the top right corner, you actually say bottom left:

linear-gradient(bottom left, green 50%, lightblue 50%)
Figure 2

But bottom left does not equal 45deg, unless the background area is exactly square.  If the area is not square, then the gradient line goes from one corner to another, with the boundary lines perpendicular to that, as shown in Figure 2.  Again, I added a gradient line and centerpoint in Photoshop for clarity.

Of course, this means that if the background area resizes in either direction, then the angle of the gradient line will also change.  Make the element taller or narrower, and the line will rotate counter-clockwise (UK: anti-clockwise); go shorter or wider and it will rotate clockwise (UK: clockwise).  This might well be exactly what you want.  It’s certainly different than a degree angle value, which will never rotate due to changes in the background’s size.

The second way to use keywords looks similar, but has quite different results.  You use the same top/bottom/left/right terms, but in addition you prepend the to keyword, like so:

linear-gradient(to top right, green 50%, lightblue 50%)
Figure 3

In this case, it’s clear that you’re declaring the gradient line’s destination and not its origin; after all, you’re saying to top right.  However, when you do it this way, you are not specifying the top right corner of the background area.  You’re specifying a general topward-and-rightward direction.  You can see the result of the previous sample in Figure 3; once more, Photoshop was used to add the gradient line.

Notice the hard-stop boundary line.  It’s actually stretching from top left to bottom right (neither of which is top right).  That’s because with the to keyword in front, you’re triggering what’s been referred to as “magic corners” behavior.  When you do this, no matter how the background area is (re)sized, that boundary line will always stretch from top left to bottom right.  Those are the magic corners.  The gradient line thus doesn’t point into the top right corner, unless the background area is perfectly square — it points into the top right quadrant (quarter) of the background area.  Apparently the term “magic quadrants” was not considered better than “magic corners”.

The effect of changing the background area’s size is the same as before; decreasing the height or increasing the width of the background area will deflect the gradient line clockwise, and the opposite change to either axis will produce the opposite deflection.  The only difference is the starting condition.

Beyond all this, if you want to use keywords that always point toward a corner, as in Figure 2 except specifying the destination instead of the origin, that doesn’t appear to be an option.  Nor can you declare an origin quadrant.  If you want the gradient line to always traverse from corner to corner, you declare the origin of the gradient line (Figure 2).  If you want the “magic corners” effect, where the 50% color-stop line stretches from corner to corner while the gradient line’s direction flexes with the background’s shape, you declare a destination quadrant (Figure 3).

As for actual support:  as of this writing, only Firefox and Opera support “magic corners”.  All current browsers — in Explorer’s case, that means IE10 — support angles and non-magic keywords, which means Opera and Firefox support both keyword behaviors.  Nobody has yet switched from math angles to compass angles.  (I used 45deg very intentionally, as it’s the same direction in either system.)

That’s the state of things with linear gradients right now.  I’m interested to know what you think of the various keyword patterns and behaviors — I know I had some initial trouble grasping them, and having rather different effects for the two patterns seems like it will be confusing.  What say you?


Whitespace in CSS Calculations

Published 12 years, 10 months past

I’ve been messing around with native calculated values in CSS, and there’s something a bit surprising buried in the value format.  To quote the CSS3 Values and Units specification:

Note that the grammar requires spaces around binary ‘+’ and ‘-’ operators. The ‘*’ and ‘/’ operators do not require spaces.

In other words, two out of four calculation operators require whitespace around them, and for the other two it doesn’t matter.  Nothing like consistency, eh?

This is why you see examples like this:

width: calc(100%/3 - 2*1em - 2*1px);

That’s actually the minimum number of characters you need to write that particular expression, so far as I can tell.  Given the grammar requirements, you could legitimately rewrite that example like so:

width: calc(100% / 3 - 2 * 1em - 2 * 1px);

…but not like so:

width: calc(100%/3-2*1em-2*1px);

The removal of the spaces around the ‘-’ operators means the whole value is invalid, and will be discarded outright by browsers.

We can of course say that this last example is kind of unreadable and so it’s better to have the spaces in there, but the part that trips me up is the general inconsistency in whitespace requirements.  There are apparently very good reasons, or at least very historical reasons, why the spaces around ‘+’ and ‘-’ are necessary.  In which case, I personally would have required spaces around all the operators, just to keep things consistent.  But maybe that’s just me.

Regardless, this is something to keep in mind as we move forward into an era of wider browser support for calc().
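
In the meantime, a cautious pattern is to pair the calc() value with a plain fallback, plus whatever prefixed forms the browsers of the day require (the 31% here is just a rough stand-in for one-third minus the margins; adjust to taste):

width: 31%;                                   /* rough fallback for browsers without calc() */
width: -webkit-calc(100%/3 - 2*1em - 2*1px);
width: -moz-calc(100%/3 - 2*1em - 2*1px);
width: calc(100%/3 - 2*1em - 2*1px);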

Oh, and by the way, the specification also says:

The ‘calc()’ expression represents the result of the mathematical calculation it contains, using standard operator precedence rules.

Unfortunately, the specification doesn’t seem to actually define these “standard operator precedence rules”.  This makes for some interesting ambiguities because, as with most standards, there are so many to choose from.  For example, 3em / 2 * 3 could be “three em divided by two, with the result multiplied by three” (thus 4.5em) or “three em divided by six” (thus 0.5em), depending on whether multiplication and division share the same precedence and evaluate left to right, or multiplication binds more tightly than division.

I’ve looked around the Values and Units specification but haven’t found any text defining the actual rules of precedence, so I’m going to assume US rules (which would yield 4.5em) unless proven otherwise.  Initial testing seems to bear this out, but not every browser line supports these sorts of expressions yet, so there’s still plenty of time for them to introduce inconsistencies.  If you want to be clear about how you want your operators to be resolved, use parentheses: they trump all.  If you want to be sure 3em / 2 * 3 resolves to 4.5em, then write it (3em / 2) * 3, or (3em/2)*3 if you care to use inconsistent whitespacing.
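
Dropped into an actual declaration, that might look like:

width: calc((3em / 2) * 3);   /* unambiguously 4.5em */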


Negative Proximity

Published 12 years, 11 months past

There’s a subtle aspect of CSS descendant selectors that most people won’t have noticed because it rarely comes up: selectors have no notion of element proximity.  Here’s the classic demonstration of this principle:

body h1 {color: red;}
html h1 {color: green;}

Given those styles, all h1 elements will be green, not red.  That’s because the selectors have equal specificity, so the last one wins.  The fact that the body element is “closer to” the h1 than the html element in the document tree is irrelevant.  CSS has no mechanism for measuring proximity within the tree, and if I had to place a bet on the topic I’d bet that it never will.

I bring this up because it can get you into trouble when you’re using the negation pseudo-class.  Consider:

div:not(.one) p {font-weight: bold;}
div.one p {font-weight: normal;}

<div class="one">
  <div class="two">
    <p>Hi there!</p>
  </div>
</div>

Given these styles, the paragraph will not be boldfaced.  That’s because both rules match, so the last one wins.  The paragraph will be normal-weight.

“AHA!” you cry.  “But the first rule has a higher specificity, so it wins regardless of the order they’re written in!”  You’d think so, wouldn’t you?  But it turns out that the negation pseudo-class isn’t counted as a pseudo-class.  It, like the universal selector, doesn’t contribute to specificity at all:

Selectors inside the negation pseudo-class are counted like any other, but the negation itself does not count as a pseudo-class.

 — Selectors Level 3, section 9: Calculating a selector’s specificity
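
Tallying up the two rules above under those rules (counted as ID,class,element):

div:not(.one) p {font-weight: bold;}     /* specificity 0,1,2: the .one inside :not() counts; :not() itself adds nothing */
div.one p {font-weight: normal;}         /* specificity 0,1,2: a tie, so the later rule wins */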

If you swapped the order of the rules, you’d get a boldfaced paragraph thanks to the “all-other-things-being-equal-the-last-rule-wins” step in the cascade.  However, that wouldn’t keep you from getting a red-on-red paragraph in this case:

div:not(.one) p {color: red;}
div.one p {background: red;}

<div class="one">
  <div class="two">
    <p>Hi there!</p>
  </div>
</div>

The paragraph is a child of a div that doesn’t have a class of one, but it’s also descended from a div that has a class of one.  Both rules apply.

(Thanks to Stephanie Hobson for first bringing this to my attention.)


The Web Ahead, Episode #18: Me!

Published 12 years, 11 months past

Last Thursday, I had the rare honor and privilege of chatting with Jen Simmons as a guest on The Web Ahead.  (I’ve also chatted with Jen in real life.  That’s even awesomer!)  As is my wont, I completely abused that privilege by chatting for two hours — making it the second-longest episode of The Web Ahead to date — about the history of the web and CSS, what’s coming up that jazzes me the most, and all kinds of stuff.  I even revealed, toward the end of the conversation, the big-picture projects I dearly wish I had time to work on.

The finished product was published last Friday morning.  I know it’s a bit of a lengthy beast, but if you’re at all interested about how we got to where we are with CSS, you might want to give this a listen:  The Web Ahead, Episode #18.  Available for all your finer digital audio players via embedded Flash player, iTunes, RSS, and MP3 download.

My deepest thanks to Jen for inviting me to be part of the show!


“The Vendor Prefix Predicament” at ALA

Published 13 years, 6 days past

Published this morning in A List Apart #344: an interview I conducted with Tantek Çelik, web standards lead at Mozilla, on the subject of Mozilla’s plan to honor -webkit- prefixes on some properties in their mobile browser.  Even better: Lea Verou’s Every Time You Call a Proprietary Feature ‘CSS3,’ a Kitten Dies.  Please — think of the kittens!

My hope is that the interview brings clarity to a situation that has suffered from a number of misconceptions.  I do not necessarily hope that you agree with Tantek, nor for that matter do I hope you disagree.  While I did press him on certain points, my goal for the interview was to provide him a chance to supply information, and insight into his position.  If that job was done, then the reader can fairly evaluate the claims and plans presented.  What conclusion they reach is, as ever, up to them.

We’ve learned a lot over the past 15-20 years, but I’m not convinced the lessons have settled in deeply enough.  At any rate, there are interesting times ahead.  If you care at all about the course we chart through them, be involved now.  Discuss.  Deliberate.  Make your own case, or support someone else’s case if they’ve captured your thoughts.  Debate with someone who has a different case to make.  Don’t just sit back and assume everything will work out — for while things usually do work out, they don’t always work out for the best.  Push for the best.

And fix your browser-specific sites already!


Unfixed

Published 13 years, 1 week past

Right in the middle of AEA Atlanta — which was awesome, I really must say — there were two announcements that stand to invalidate (or at least greatly alter) portions of the talk I delivered.  One, which I believe came out as I was on stage, was the publication of the latest draft of the CSS3 Positioned Layout Module.  We’ll see if it triggers change or not; I haven’t read it yet.

The other was the publication of the minutes of the CSS Working Group meeting in Paris, where it was revealed that several vendors are about to support the -webkit- vendor prefix in their own very non-WebKit browsers.  Thus, to pick but a single random example, Firefox would throw a drop shadow on a heading whose entire author CSS is h1 {-webkit-box-shadow: 2px 5px 3px gray;}.

As an author, it sounds good as long as you haven’t really thought about it very hard, or if perhaps you have a very weak sense of the history of web standards and browser development.  It fits right in with the recurring question, “Why are we screwing around with prefixes when vendors should just implement properties completely correctly, or not at all?”  Those idealized end-states always sound great, but years of evidence (and reams upon reams of bug-charting material) indicate it’s an unrealistic approach.

As a vendor, it may be the least bad choice available in an ever-competitive marketplace.  After all, if there were a few million sites that you could render as intended if only the authors used your prefix instead of just one, which would you rather: embark on a protracted, massive awareness campaign that would probably be contradicted to death by people with their own axes to grind; or just support the damn prefix and move on with life?

The practical upshot is that browsers “supporting alien CSS vendor prefixes”, as Craig Grannell put it, seriously cripples the whole concept of vendor prefixes.  It may well reduce them to outright pointlessness.  I am on record as being a fan of vendor prefixes, and furthermore as someone who advocated for the formalization of prefixing as a part of the specification-approval process.  Of course I still think I had good ideas, but those ideas are currently being sliced to death on the shoals of reality.  Fingers can point all they like, but in the end what matters is what happened, not what should have happened if only we’d been a little smarter, a little more angelic, whatever.

I’ve seen a proposal that vendors agree to only support other prefixes in cases where they are un-prefixing their own support.  To continue the previous example, that would mean that when Firefox starts supporting the bare box-shadow, they will also support -webkit-box-shadow (and, one presumes, -ms-box-shadow and -o-box-shadow and so on).  That would mitigate the worst of the damage, and it’s probably worth trying.  It could well buy us a few years.

Developers are also trying to help repair the damage before it’s too late.  Christian Heilmann has launched an effort to get GitHub-based projects updated to stop being WebKit-only, and Aaron Gustafson has published a UNIX command to find all your CSS files containing “webkit” along with a call to update anything that’s not cross-browser friendly.  Others are making similar calls and recommendations.  You could use PrefixFree as a quick stopgap while going through the effort of doing manual updates.  You could make sure your CSS pre-processor, if that’s how you swing, is set up to do auto-prefixing.
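
To pick up the earlier box-shadow example, the cross-browser-friendly version lists the known prefixed forms alongside the bare property, something like this:

h1 {
  -webkit-box-shadow: 2px 5px 3px gray;
  -moz-box-shadow: 2px 5px 3px gray;
  box-shadow: 2px 5px 3px gray;
}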

Non-WebKit vendors are in a corner, and we helped put them there.  If the proposed prefix change is going to be forestalled, we have to get them out.  Doing that will take a lot of time and effort and awareness and, above all, widespread interest in doing the right thing.

Thus my fairly deep pessimism.  I’d love to be proven wrong, but I have to assume the vendors will push ahead with this regardless.  It’s what we did at Netscape ten years ago, and almost certainly would have done despite any outcry.  I don’t mean to denigrate or undermine any of the efforts I mentioned before — they’re absolutely worth doing even if every non-WebKit browser starts supporting -webkit- properties next week.  If nothing else, it will serve as evidence of your commitment to professional craftsmanship.  The real question is: how many of your fellow developers come close to that level of commitment?

And I identify that as the real question because it’s the question vendors are asking — must ask — themselves, and the answer serves as the compass for their course.


CSS Modules Throughout History

Published 13 years, 4 months past

For very little reason other than I was curious to see what resulted, I’ve compiled a list of various CSS modules’ version histories, and then used CSS to turn it into a set of timelines.  It’s kind of a low-cost way to visualize the life cycle of and energy going into various CSS modules.

I’ll warn you up front that as of this writing the user interaction is not ideal, and in some places the presentation suffers from too much content overlap.  This happens in timelines where lots of drafts were released in a short period of time.  (In one case, two related drafts were released on the same day!)  I intend to clean up the presentation, but for the moment I’m still fiddling with ideas.  The obvious one is to rotate every other spec name by -45 degrees, but that looked kind of awful.  I suspect I’ll end up doing some sort of timestamp comparison and if they’re too close together, toss on a class that invokes a -45deg rotation.  Or maybe I’ll get fancier!
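
The rotation class itself would be trivial; something along these lines, with a class name of my own invention (a real stylesheet of the day would want the prefixed transform properties as well):

.crowded {transform: rotate(-45deg);}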

The interaction is a little tougher to improve, given what’s being done here, but I have a few ideas for making things, if not perfect, at least less twitchy.

I should also note that not every module is listed as I write this:  I intentionally left off modules whose last update was 2006 or earlier.  I may add them at the end, or put them into a separate set of timelines.  The historian in me definitely wants to see them included, but the shadow of a UX person who dwells somewhere in the furthest corners of my head wanted to avoid as much clutter as possible.  We’ll see which one wins.

Anyway, somewhat like the browser release timeline, which is probably going to freeze in the face of the rapid-versioning schemes that are all the rage these days, I had fun combining my love of the web and my love of history.  I should do it more often, really.  The irony is that I don’t really have the time.


Un-fixing Fixed Elements with CSS Transforms

Published 13 years, 5 months past

In the course of experimenting with some new artistic scripts to follow up “Spinning the Web”, I ran across an interesting interaction between positioning and transforms.

Put simply: as per the Introduction of the latest CSS 2D Transforms draft, a transformed element creates a containing block for all its positioned descendants.  This occurs even in the absence of any explicit positioning of the transformed element.

Let’s walk through that.  Say you have a document whose body contains nothing except a position: static (normal-flow) div that contains some absolutely-positioned descendants.  The containing block for those positioned elements will be the root element.  Nothing unusual or unexpected there.

But then you decide to declare div {transform: rotate(10deg);}.  (Or even 0deg, which will have the same result.)  Now the div is the containing block for the absolutely-positioned elements that descend from it.  It’s as though transforming an element force-adds position: relative.  The positioned elements will rotate with their ancestor and be placed according to its containing block — not that of the root element.
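
To make that walk-through concrete, here’s a minimal sketch of the scenario (the .badge class is my own invention):

div {transform: rotate(10deg);}                        /* force-creates a containing block */
div .badge {position: absolute; top: 0; right: 0;}     /* now placed against the div and rotated with it; remove the
                                                          transform and it’s placed against the root element instead */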

Okay, so that’s a little unusual but perhaps not unexpected.  I could make arguments both ways, and some of the arguments could get pretty complex.  To pick one example, if the transformed element didn’t generate a containing block, how would translate transforms be handled?

Either way, here’s where things got really troublesome for me:  a transformed element creates a containing block even for descendants that have been set to position: fixed.  In other words, the containing block for a fixed-position descendant of a transformed element is the transformed element, not the viewport.  Furthermore, if the transformed element is in the normal flow, it will scroll with the document and the fixed-position descendants will scroll with it. You can see my test case, where the red and blue boxes would overlap each other and stay fixed in place, except the second green div has been rotated.
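
Here’s a rough approximation of that situation (class names are mine, not those of the actual test case):

.plain .fix  {position: fixed; top: 1em; left: 1em;}   /* pinned to the viewport, as you’d expect */
.tilted      {transform: rotate(10deg);}
.tilted .fix {position: fixed; top: 1em; left: 1em;}   /* “pinned” to the rotated div instead, and scrolls along with it */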

Obviously this makes the fixed-position elements something less than fixed-position.  In effect, not only does the transformed element act as if it’s been force-assigned position: relative, the fixed descendants behave as if they’ve been force-changed to position: absolute.

I find this not only unusual and unexpected, but also a wee bit unsettling.  Personally, I think it goes too far.  Fixed-position elements should be fixed to the viewport, regardless of the transformation of their ancestors.  Of course, if you agree with my thinking there, realize that opens a whole new debate about how, or even whether, transforms of ancestors should be carried to fixed-position descendants.

I have my own intuitions about that, but this is definitely territory where intuitions are to be treated with caution.  There are a lot of interacting behaviors no matter what you do, and no matter what you do someone’s going to find the results baffling in some way or other.

But since I do have intuitions, here’s what they are:  transformed elements in the normal flow or floated do not establish containing blocks for absolutely- and fixed-position descendants.  This means that any transforms you apply to the transformed element are not applied to the positioned descendants, because transforms don’t inherit.

What if you want a normal-flow transformed element to be a containing block?  Use position: relative, same as you would if there were no transform.  And if you want the transforms to be passed on to the descendants even though no containing block is established?  The inherit value would work in some cases, though not all.  That’s where my approach runs aground, and I’m not yet sure how to get it back to sea.
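
Under that model, opting back in might look something like this (a sketch of the proposal, not of how browsers actually behave today):

div {transform: rotate(10deg); position: relative;}    /* explicitly establish the containing block */
div .badge {transform: inherit;}                       /* pass the transform along explicitly, since transforms don’t inherit by default */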

Okay, so that’s what I think.  What do you think?

