Posts in the Commentary Category

Ventura Vexations

Published 11 months, 2 weeks past

I’ve been a bit over a month now on my new 14” MacBook Pro, and I have complaints.  Not about the hardware, which is solid yet lightweight, super-quiet yet incredibly fast and powerful, long-lived on battery, and decent enough under the fingertips.  Plus, all the keyboard keys Just Work™, unlike the MBP it replaced!  So that’s nice.

No, my complaints are entirely about the user environment.  At first I thought this was because I skipped directly from OS X 10.14 to macOS 13, and simply wasn’t used to How The Kids Do Things These Days®, but apparently I would’ve felt the same even if I’d kept current with OS updates.  So I’m going to gripe here in hopes someone who knows more than me will have recommendations to ameliorate my annoyance.

DragThing Dismay

This isn’t on Apple, but still, it’s a huge loss for me.  I know I already complained about the lack of DragThing, but I really, really do miss what it did for me.  You never know what you’ve got ’til it’s gone, right?  But let me be clear about exactly what it did for me, which so far as I can tell no macOS application does, nor does macOS itself.

The way I used DragThing was to have a long shelf down the right side of my monitor containing small-but-recognizable icons representing my most-used folders (home directory, Downloads, Documents, Applications, a few other folders) and a number of applications.  It stayed there all the time, and the icons were always there whether or not the application was running.

When I launched, say, Firefox, then there would be a little indicator next to its application icon in DragThing to indicate it was running.  When I quit Firefox, the indicator went away but the Firefox icon stayed.  And also, if I launched an application that wasn’t in the DragThing shelf, it did not add an icon for that application to the shelf. (I used the Dock at the bottom of the screen to show me that.)

There are super-powered application switchers available for macOS, but as far as I’ve seen, they only list the applications actually running.  Launch an application, its icon is added.  Quit an application, its icon disappears.  None of these switchers let me keep persistent static one-click shortcuts to launch a variety of applications and open commonly-used folders.

Dock Folder Disgruntlement

Now I’m on to macOS itself.  Given the previous problem, the Dock is the only thing available to me, and I have gripes about it.  One of the bigger ones is rooted in folders kept on the Dock, to the right of the bar that divides them from the application icons.  When I click on them, I get a popup (wince) or a Stack (shudder) instead of them just opening the target folder in the Finder.

In the Before Times, I could create an alias to the folder and drop that in the Dock, the icon in the Dock would look like the target folder, and clicking on the alias opened the folder’s window.  If I do that now, the click-to-open part works, but the aliases all look like blank text documents with tiny arrows.  What the hell?

If I instead add actual folders (not aliases) to the Dock, holding down ⌥⌘ (option-command) when I click them does exactly what I want.  Only, I don’t want to have to hold down modifier keys, especially when using the trackpad.  I’ve mostly adapted to the key combo, but even on desktop I still sometimes click a folder and blink in irritation at the popup thingy for a second before remembering that things are stupider now.

Translucency Tribulation

The other problem with the Dock is that mine is too opaque.  That’s because the nearly-transparent Finder menu bar was really not doing it for me, so acting on a helpful tip, I went and checked the “Reduce Transparency” option in the Accessibility settings.  That fixed the menu bar nicely, but it also made the Dock opaque, which I didn’t actually want.  I can pretty easily live with it, but I do wish I could make just the menu bar opaque (without having to resort to desktop wallpaper hacks, which I suspect do not do well with changes of display resolution).

Shortcut Stupidity

And while I’m on the subject of the menu bar: no matter the application or even the Finder itself, dropdown menus from the menu bar render the actions you can do in black and the actions you can’t do in washed-out gray.  Cool.  But also, all the keyboard shortcuts are now a washed-out gray, which I keep instinctively thinking means they’ve been disabled or something.  They’re also a lot more difficult for my older eyes to pick out, and I have to flick my eyes back and forth to make sure a given keyboard shortcut corresponds to a thing I actually can do.  Seriously, Apple, what the hell?

Trash Can Troubles

I used to have the Trash can on the desktop, down in the lower right corner, and now I guess I can’t.  I vaguely recall this is something DragThing made possible, so maybe that’s another reason to gripe about the lack of it, but it’s still bananas to me that the Trash can is not there by default.  I understand that I may be very old.

Preview Problems

On my old machine, Preview was probably the most rock-solid application on there.  On the new machine, Preview occasionally hangs on closing heavily-commented PDFs when I choose not to save changes.  I can force-quit it and so far haven’t experienced any data corruption, but it’s still annoying.


Those are the things that have stood out the most to me about Ventura.  How about you?  What bothers you about your operating system (whichever one that is) and how would you like to see it fixed?

Oh, and I’ll follow this up soon with a post about what I like in Ventura, because it’s not all frowns and grumbles.


No, Apple Did Not Crowdfund :focus-visible in Safari

Published 2 years, 1 month past

It’s not every week the release notes for a preview build of a web browser ignite Yet Another Twitter Teacup Storm (YATTS™), but that’s what happened when Safari Technology Preview 138 dropped late last week. At least, it’s what happened in the Twitter Teacups I tend to sip.

Just in case you missed it, here’s the summary:

  1. The WebKit team released Safari Technology Preview 138, and the release notes for same.
  2. The “CSS” section of the release notes started with a line saying:
    Enabled :focus-visible pseudo-class by default (r286783, r286776, r286775)
  3. A few people, including Jen Simmons, gave credit to Igalia for implementing :focus-visible by means of a crowdfunding project (more on that in a moment).
  4. KABOOM

I suppose I could be a bit more explicit in step 4, but I don’t really want to get into speculating on apparent motives and assumptions by others, because that’s not the point of this post. The point of this post is to clear up what seems to be a very common misunderstanding.

What I kept seeing people saying was something to the effect of, “Why the hell did Apple have to crowdfund this feature?” And that’s wrong in two ways:

  1. Apple doesn’t have to crowdfund anything, up to and including colonization of the Moon. (They might have to ask for a few bucks to do Venus or Mars.)
  2. Apple didn’t crowdfund :focus-visible.

This isn’t me splitting hairs, either. Nobody at Apple asked the crowd to fund anything. Nobody at Apple asked Igalia to crowdfund anything. They didn’t even ask Igalia to implement :focus-visible, and then Igalia decided to crowdfund the work. In fact, all of those assumptions get things almost exactly backwards — which is understandable! It’s what we expect from our experience of how the web has developed since at least the late 1990s. But here, something new happened.

So, let me summarize what happened using yet another ordered list:

  1. Igalia noticed they’d done a fair bit of work adding features to all the browser engines (e.g., CSS Grid), with each project supported by a single paying client, and thought, “Wait a minute, the web is a commons. Why are features being driven one client at a time?”
  2. Of its own volition, Igalia decided to experiment with the idea of letting the web community (the “crowd”) vote for implementation of a missing browser feature with their wallets (the “funding”). They called this ongoing experiment Open Prioritization, and launched it in 2020.
  3. There were six possible projects, chosen by Igalia through their own set of criteria, for the community to vote on by pledging monetary support:
    • CSS lab() colors in Firefox
    • :focus-visible in WebKit/Safari
    • HTML inert in WebKit/Safari
    • Selector list arguments for :not() in Chrome
    • CSS Containment support in WebKit/Safari
    • CSS d (SVG path) support in Firefox
  4. The winner was implementing :focus-visible in WebKit/Safari, and by “winner”, I mean that project got the most monetary commitment from the members of the community.
  5. Igalia matched the community contributions dollar for dollar, and moved forward with the work.
  6. The work was done, and submitted to the WebKit code base. (Along the way, inconsistencies and other problems were discovered, addressed, and fixes contributed to engines other than WebKit.)
  7. The WebKit team accepted Igalia’s contributions, and are now shipping them in a preview build of Safari for developers to test out.

In other words: the community (more precisely, a portion of it) voted on which feature was most needed, Igalia implemented it, and Apple accepted it. Apple’s role in this process came at the end, not the beginning.

And no, this is not the usual thing! It’s not supposed to be. Igalia is deeply committed to not just advancing the web, but to an unprecedented extent democratizing that advancement. It isn’t anything like a pure democratic effort, at least not yet, but these are early days and the initiative is structured to meet the current constraints of the environment (read: living under capitalism means coders gotta get paid).

But why is Igalia doing this? Time for another list! Just to switch things up, this one will be unordered:

  • Because the community should have more of a say in what gets prioritized in browsers. The community can be large collections of individuals, or it could be small collections of small companies, or a mix.
  • Because in every browser team, there’s always a priority list, and sometimes good features get pushed down that list for various reasons. It could be lack of expertise. It could be lack of time. It could be lack of interest. It could be interference by higher-ups. It doesn’t matter.
  • Because browser teams — not any one team, but the unfortunately small number of browser teams — are a bottleneck. No matter how much money the companies who employ those teams throw at them, they will always be a bottleneck, because resources are finite.

And this brings us to why I think “Wait, shouldn’t the $browser_name team have already done $feature_name by now? Why did an outside party have to do it?” is a little short-sighted. There will always be a $feature_name that the $browser_name team hasn’t done yet, for any value of $browser_name you care to posit. Today it could be WebKit; tomorrow, Chromium. In ten years, maybe there will be teams at Amazon and Huawei, making browser engines that compete for user share. Maybe not. Doesn’t actually matter, because however many or few engines there are, no matter what their priorities are, this problem will persist.

This is also why I’m not getting into Apple’s funding levels and priorities for WebKit and the web. Yes, there is much Apple-the-company can be criticized about, and personally, I am one of the biggest fans browser-engine diversity ever had, but that is a different conversation. Even if you could somehow wave a magic wand and open all platforms everywhere to engine diversity, and simultaneously cause a thousand browsers to bloom, we would still have the same basic problem. Open Prioritization would still need to exist.

For another piece of evidence on that point, look at the second Open Prioritization project: MathML-Core, whose goal is to bring full cross-browser support for the MathML Core specification to browsers, starting with Chrome (which needs the most work in this area) and then moving on to other engines (which need less work, but still need work). Doing this will not only improve support for web-wide math markup and its visual rendering, but will also improve the accessibility of math content on the web by making math a first-class content type in browsers. And you can even now contribute to this effort with a pledge of your own!

“But wait, why didn’t $browser_name already finish implementing MathML Core?” It doesn’t matter. Whether or not $browser_name (whichever one that is) should have done this by now, they haven’t. Maybe they would have done it eventually, but again, that doesn’t matter. We can make it happen now.

That’s what happened with :focus-visible in WebKit, which helped improve other engines; it’s what will happen with MathML Core in various browsers; and it could very well be what happens with other features in the future. Igalia would love nothing more than to see more and more projects launch, even if they don’t get hired to do the work for a single one of them. This isn’t us spackling over the cracks of browser teams’ neglect. This is us trying to chart an entirely new way to advance browser engines.

I go deeper into all of the above, as well as how Open Prioritization is designed to be an open forum and not some private reserve of Igalia’s, in a 17-minute talk delivered at W3C TPAC in fall 2021, available and captioned on Igalia’s YouTube channel. This post sort of summarizes it, but there are more examples and details in the talk, so if you’re interested, please do check that out.

Just in case your eyes sort of glazed and you skipped to the end to see if there was a TL;DR, here it is:

The addition of :focus-visible to WebKit was led by the community, done by Igalia, and contributed to WebKit without any involvement from Apple except in the sense of their reviewing patches and accepting the contributions. Many of us are mad at Apple for a lot of good reasons, but please don’t let the process of venting that anger tar the goals and achievements of Open Prioritization. The future browser-feature priority you save may be your own.


Get Static

Published 3 years, 11 months past

If you are in charge of a web site that provides even slightly important information, or important services, it’s time to get static.  I’m thinking here of sites for places like health departments (and pretty much all government services), hospitals and clinics, utility services, food delivery and ordering, and I’m sure there are more that haven’t occurred to me.  As much as you possibly can, get it down to static HTML and CSS and maybe a tiny bit of enhancing JS, and pare away every byte you can.

Because too many sites are already crashing because their CMSes can’t keep up with the traffic surges.  And too many sites are using dynamic frameworks that drain mobile batteries and shut out people with older browsers.  That’s annoying and counter-productive in the best of times, but right now, it’s unacceptable.  This is not the time for “well, this is as performant as our stack gets, so I guess users will have to live with it”.  Performance isn’t just something to aspire to any more.  Right now, in some situations, performance could literally be life-saving to a user, or their family.

We’re in this to serve our users.  The best service you can render at this moment is to make sure they can use your site or service, not get 502/Bad Gateway or a two-minute 20%-battery-drain page render.  Everything should take several back seats to this effort.

I can’t tell you how best to get static — only you can figure that out.  Maybe for you, getting static means using very aggressive server caching and a cache-buster approach to updating info.  Maybe it means using some kind of static-render plugin for your CMS.  Maybe it means accelerating a planned migration to a static-site generator like Jekyll or Eleventy, or a flat-file CMS like Grav.  Maybe it means saving pages as HTML from your browser and hand-assembling a static copy of your site for now.  There are a lot of ways to do this, but whatever way you choose, do it now.
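
If you go the caching route, the cache-buster half can be as simple as a version stamp on your URLs.  A minimal sketch, with a hypothetical BUILD value and endpoint (your stack may well offer something better):

    // A version stamp, bumped on every deploy (hypothetical value).
    const BUILD = "2020-03-25-1";

    // Stamp the URL so long-cached pages still fetch fresh data after a
    // deploy, while everything else stays aggressively cached.
    function versioned(url) {
      return url + (url.includes("?") ? "&" : "?") + "v=" + BUILD;
    }

    fetch(versioned("/status/latest.json"))
      .then((response) => response.json())
      .then((data) => console.log(data));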


Addendum: following are a few resources that can help. Please suggest more in the comments!


Trying to Work From Home

Published 4 years past

I’ve been working from home for (checks watch) almost 19 years now, and I’d love to share some tips with you all on how to make it work for you.

Except I can’t, because this has been incredibly disruptive for me.  See, my home is usually otherwise empty during the day — spouse at work, kids at school — which means I can crank up the beats and swear to my heart’s content at my code typos.  Now, not only do I have to wear headphones and monitor my language when I work, I’m also surrounded by office-mates who basically play video games and watch cat videos all day, except for those times when I really get focused on a task, when they magically sense it’s the perfect time to come ask me random questions that could have waited, derailing my focus and putting me back at square one.

Which, to most of you used to working in an office setting, I suppose might seem vaguely familiar.  I’m not used to it at all.

Here’s what I can tell you: if you’re having trouble focusing on work, or anything else, it’s not that you’re terrible at working from home or bad at your job.  It’s that you’re doing this in a set of circumstances completely unprecedented in our lifetimes.  It’s that you’re doing this while worried not only about keeping yourself and your loved ones safe from a global pandemic, but probably also worried about your continued employment — not because you’re doing badly, but because the economy is on the verge of freezing up completely.  No spending means no business income means no salaries means no money to spend.

We can hope for society-level measures to unjam the economic engine, debt leniency or zero-interest loans or Universal Basic Income or what have you, but until those measures exist and begin to work together, we’re all stumbling scared in a pitch-black forest.  Take it from someone who has been engulfed by overwhelming, frightening, pitiless circumstances before: Work can be a respite, but it’s hard to sustain that retreat.  It’s hard to motivate yourself to even think about work, let alone do a good job.

Be forgiving of yourself.  Give yourself time and space to process the fear, to work through it, and to find a place for yourself in relation to it, so that you can exist beside it without it always disrupting your thoughts.  That’s the only way I know to free up any mental resources to try to do good work.  It also puts you in a place where you can act with some semblance of reason, instead of purely from fear.

Stay safe, friends.  We have a long, unknown road ahead.  Adjustment will be a long time in coming.  Support each other as much as you can.  Community is, in the end, the most resilient and replenishing force we have.


Running Code Over Time

Published 4 years, 2 months past

I’m posting this on the last day of 2019.  As I write it, the second post I ever made on meyerweb says it was published “20 years, 6 days ago”.  It was published on the second-to-last day of 1999, which was 20 years and one day ago.

Up top, the date when I wrote this post.  Down below, a five-day error.

What I realized, once the discrepancy was pointed out to me (hat tip: Eric Portis), is the five-day error is there because in the two decades since I posted it, there have been five leap days.  When I wrote the code to construct those relative-time strings, I either didn’t think about leap days, or if I did, I decided a day or two here and there wouldn’t matter all that much.

Which is to say, I failed to think about the consequences of my code running over long periods of time.  Maybe a day or two of error isn’t all that big a deal, in human-friendly relative-time output.  If a post was six years and two days ago but the code says 6 and 1, well, nobody will really care that much even if they notice.  But five days is noticeable, and what’s more, it’s a little human-unfriendly.  It jars.
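
Here’s a sketch of the naive math that produces exactly this kind of error (a reconstruction, to be clear, not the actual code running on meyerweb): divide the elapsed days by a flat 365, and every leap day since publication piles up in the remainder.

    // Naive relative-time math: pretends every year is exactly 365 days.
    const MS_PER_DAY = 24 * 60 * 60 * 1000;
    const posted = new Date("1999-12-30");
    const now = new Date("2019-12-31");

    const days = Math.floor((now - posted) / MS_PER_DAY);
    const years = Math.floor(days / 365);
    const extra = days - years * 365;
    console.log(`${years} years, ${extra} days ago`);
    // "20 years, 6 days ago": the five leap days since 1999, plus the one
    // real day, all land in the remainder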

I think a lot of us tend not to think about running code over long time periods.  I don’t even mean long time periods in “Web years”, though I could — the Web is now just about three decades old, depending on when you reckon the start date.  (Births of the web are like standards — there are so many to choose from!)  I mean human-scale long periods of time, code that is old enough to have been running before your children were born, or maybe even you yourself.  Code that might still be running long after you die, or your children do.

The Web often seems ephemeral.  Trends shift, technologies advance, frameworks flare and fade.  There’s always so much new shiny stuff to try, new things to learn, that it seems like nothing will last.  We say “the Internet never forgets” (even though it does so all the time) but it’s lip service at this point.  And yet, the first Web pages are still online and accessible.

Which means the code that underpins them, from the HTML to the HTTP, is all still running and viable after 30 years.  The foresight that went into those technologies, and the bedrock commitment to consistency over long time frames, is frankly incredible.  It’s inspiring.  Compatibility both forwards and backwards in time, over decades and perhaps eventually centuries, is a remarkable achievement.

As I write this, I have yet to fix my relative-time code.  It’s on my list now, and I plan to get to it soon.  I presume the folks creating Intl.RelativeTimeFormat (hat tip: Amelia Bellamy-Royds) put a lot more thought into their code than I did mine, because they know they’re writing for long time frames.
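
For the curious, that API handles the phrasing once you’ve computed the difference.  A minimal example:

    // Intl.RelativeTimeFormat does the wording; you still have to pick the
    // unit and compute the (leap-day-aware!) difference yourself.
    const rtf = new Intl.RelativeTimeFormat("en", { numeric: "auto" });
    console.log(rtf.format(-20, "year")); // "20 years ago"
    console.log(rtf.format(-1, "day"));   // "yesterday"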

What I need to do is adopt that same mindset.  I think we all do.  We should think of our code, even our designs, as running for decades, and alter our work to match.  I don’t necessarily know what that means, but we’ll never find out unless we try.


The Broken Physics of “The Umbrella Academy” Finale

Published 4 years, 10 months past

Not long ago, Kat and I got around to watching The Umbrella Academy’s first season on Netflix.  I thought it was pretty good!  It was a decent mix of good decisions and bad decisions by people in the story, I liked most of the characters and their portrayals, and I thought the narrative arcs came to good places. Not perfect, but good.

Except.  I have to talk about the finale, people.  I have to get into why the ending, the very last few minutes of season one, just didn’t work for me.  And in order to do that, I’m going to deploy, for the first time ever, a WordPress Spoiler Cut™ on this here blog o’ mine, because this post is spoilerrific.  Ready?  Here we go.

Massive, massive spoilers and a fair amount of science ahead!

Securing Web Sites Made Them Less Accessible

Published 5 years, 7 months past

In the middle of last month (July 2018), I found myself staring at a projector screen, waiting once again to see if Wikipedia would load.  If I was lucky, the page started rendering 15-20 seconds after I sent the request.  If not, it could be closer to 60 seconds, assuming the browser didn’t just time out on the connection.  I saw a lot of “the server stopped responding” over the course of a few days.

It wasn’t just Wikipedia, either.  CNN International had similar load times.  So did Google’s main search page.  Even this here site, with minimal assets to load, took a minimum of 10 seconds to start rendering.  Usually longer.

In 2018?  Yes.  In rural Uganda, where I was improvising an introduction to web development for a class of vocational students, that’s the reality.  They can have a computer lab full of Dell desktops running Windows or rows of Raspberry Pis running Ubuntu or whatever setup there is, but when satellites in geosynchronous earth orbit are your only source of internet, you wait.  And wait.  And wait.

I want to explain why — and far more importantly, how we’ve made that experience interminably worse and more expensive in the name of our comfort and security.

First, please consider the enormously constrained nature of satellite internet access.  If you’re already familiar with this world, skip ahead a few paragraphs; but if not, permit me a brief description of the challenges.

For geosynchronous-satellite internet access, the speed of light becomes a factor in ping times: just having the signals propagate through a mixture of vacuum and atmosphere chews up approximately half a second of travel time over roughly 94,500 miles (~152,000 km).  If all that distance were vacuum, your absolute floor for ping latency would be about 506 milliseconds.

That’s just the time for the signals to make two trips to geosynchronous orbit and back: one round trip for the request, another for the response.  In reality, there are also the times to route the packets on either end, and the re-transmission time at the satellite itself.
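
If you want to check the arithmetic, it’s only a few lines.  The slant-range figure here is an assumption about the signal path: longer than the satellite’s 22,236-mile altitude, because the satellite is rarely straight overhead.

    // Back-of-the-envelope ping floor for a geosynchronous link.
    const C = 299792;       // speed of light in vacuum, km/s
    const SLANT_KM = 38000; // assumed ground-to-satellite signal path, km

    // A ping makes four legs: request up and down, response up and down.
    const floorMs = (4 * SLANT_KM / C) * 1000;
    console.log(`${Math.round(floorMs)} ms`); // 507 ms, give or take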

But that’s not the real connection killer in most cases: packet loss is.  After all, these packets are going to orbit and back.  Lots of things along those long and lonely signal paths can cause the packets to get dropped.  50% packet loss is not uncommon; 80% is not unexpected.

So, you’re losing half your packets (or more), and the packets that aren’t lost have latency times around two-thirds of a second (or more).  Each.

That’s reason enough to set up a local caching server.  Another, even more pressing reason is that pretty much all commercial satellite connections come with data caps.  Where I was, their cap was 50GB/month.  Beyond that, they could either pay overages, or just not have data until the next month.  So if you can locally cache URLs so that they only count against your data usage the first time they’re loaded, you do that.  And someone had, for the school where I was teaching.

But there I stood anyway, hoping my requests to load simple web pages would bear fruit, and I could continue teaching basic web principles to a group of vocational students.  Because Wikipedia wouldn’t cache.  Google wouldn’t cache.  Meyerweb wouldn’t cache.  Almost nothing would cache.

Why?

HTTPS.

A local caching server, meant to speed up commonly-requested sites and reduce bandwidth usage, is a “man in the middle”.  HTTPS, which by design prevents man-in-the-middle attacks, utterly breaks local caching servers.  So I kept waiting and waiting for remote resources, eating into that month’s data cap with every request.

The drive to force every site on the web to HTTPS has pushed the web further away from the next billion users — not to mention a whole lot of the previous half-billion.  I saw a piece that claimed, “Investing in HTTPS makes it faster, cheaper, and easier for everyone.”  If you define “everyone” as people with gigabit fiber access, sure.  Maybe it’s even true for most of those whose last mile is copper.  But for people beyond the reach of glass and wire, every word of that claim was wrong.

If this is a surprise to you, you’re by no means alone.  I hadn’t heard anything about it, so I asked a number of colleagues if they knew about the problem.  Not only had they not, they all reacted the same way I did: this must not be an actual problem, or we’d have heard about it!  But no.

Can we do anything?  For users of up-to-date browsers, yes: service workers create a “good” man in the middle that sidesteps the HTTPS problem, so far as I understand.  So if you’re serving content over HTTPS, creating a service worker should be one of your top priorities right now, even if it’s just to do straightforward local caching and nothing fancier.  I haven’t gotten one up for meyerweb yet, but I will do so very soon.
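
In case “straightforward local caching” sounds mysterious, here’s a minimal sketch of a cache-first service worker.  The cache name is arbitrary, and a real deployment would want an invalidation strategy on top of this:

    // sw.js: serve from the local cache when we can, hit the network
    // (and cache the result) only when we must.
    const CACHE = "local-cache-v1";

    self.addEventListener("fetch", (event) => {
      if (event.request.method !== "GET") return;
      event.respondWith(
        caches.open(CACHE).then(async (cache) => {
          const cached = await cache.match(event.request);
          if (cached) return cached; // no round trip to orbit
          const response = await fetch(event.request);
          if (response.ok) {
            cache.put(event.request, response.clone());
          }
          return response;
        })
      );
    });

The page registers it with navigator.serviceWorker.register("/sw.js"), and from then on repeat visits can skip the network entirely.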

That’s great for modern browsers, but not everyone has the option to be modern.  Sometimes they’re constrained by old operating systems to run older browsers, ones with no service-worker support: a lab full of Windows XP machines limited to IE8, for example.  Or on even older machines, running Windows 95 or other operating systems of that era.  Those are most likely to be the very people who are in situations where they’re limited to satellite internet or other similarly slow services with unforgiving data caps.  Even in the highly-wired world, you can still find older installs of operating systems and browsers: public libraries, to pick but one example.  Securing the web literally made it less accessible to many, many people around the world.

Beyond deploying service workers and hoping those struggling to bridge the digital divide make it across, I don’t really have a solution here.  I think HTTPS is probably a net positive overall, and I don’t know what we could have done better.  All I know is that I saw, first-hand, the negative externality that was pushed onto people far, far away from our data centers and our thoughts.

My thanks to Tim Kadlec and Ethan Marcotte for their feedback and insight while I was drafting this post, and to Lara Hogan and Aaron Gustafson for their early assistance with my research.


Increasing Accessibility

Published 6 years, 2 months past

Thanks to the fantastic comments on my previous post, I’ve made some accessibility improvements.  Chief among them: adding WAI-ARIA role values to various parts of the structure.  These include:

  • role="banner" for the site’s masthead
  • role="navigation" added to the navigation links, including subnavigation links like previous/next posts
  • role="main" for the main portion of a page
  • role="complementary" for sidebars in the blog archives
  • role="article" for any blog post, whether there are several on a page or just one

In addition, I restored skip links to the masthead of most pages (the rest will get them soon).  The links are revealed on keyboard focus, which I’m not sure I like.  I feel like these aren’t quite where they need to be.  A big limitation is the lack of :matches() (or similar) support in browsers, since I’d love to have any keyboard focus in the masthead or navigation links bring up the skip links, which requires some sort of parent selection.  I may end up using a tiny bit of enhancing Javascript to make the links’ UX more robust in JS situations, but still obviously available if JS fails.  And I may replicate them in the footer, as a way to quickly jump back up the page, especially to the navigation.
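
As a sketch of that enhancement, something like the following could stand in for the missing parent selection.  The class names are hypothetical placeholders for whatever the markup actually uses, and if the script never runs, the focus-based CSS reveal still applies:

    // Reveal the skip links whenever keyboard focus lands anywhere inside
    // the masthead or the navigation. Class names are placeholders.
    const skipLinks = document.querySelector(".skip-links");

    if (skipLinks) {
      document.addEventListener("focusin", (event) => {
        const inHeader = event.target.closest(".masthead, .navigation");
        skipLinks.classList.toggle("is-visible", Boolean(inHeader));
      });
    }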

Speaking of the navigation links, they’ve been moved in the source order to match their place in the visual layout.  My instincts with regard to source order and layout placement were confirmed to be woefully out of date: the best advice now is to put the markup where the layout calls for the content to be.  If you’re putting navigation links just under the masthead, then put their markup right after the masthead’s markup.  So I did that.

The one thing I didn’t change is heading levels, which suffer all the usual problems.  Right now, the masthead’s “meyerweb.com” is always an <h1> and the page title (or blog post titles) are all <h2>.  If I demoted the masthead content to, say, a plain old <div>, and promoted the post headings, then on pages like the home page, there’d be a whole bunch of <h1>s.  I’ve been told that’s a no-no.  If I’m wrong about that, let me know!

There’s still more to do, but I was able to put these into place with no more than a few minutes’ work, and going by what commenters told me, these will help quite a bit.  My thanks to everyone who contributed their insights and expertise!

