Archive for December, 2010

My car crash

Saturday, December 11th, 2010

Photo: My poor little car.

So I had a car crash outside the office the other day. Nobody was hurt, but my car is going to be in the shop for a while. I was turning left into the parking lot on a green light and had yielded, but I didn’t see the other guy, who was motoring on through the lights in a car with dealer plates, because he was obscured by a very large construction truck. He hit the passenger-side rear of the car, taking out the bumper and side panel, and at the very least knocking out the alignment. I don’t consider the accident my fault, but I’m pretty sure the state of Massachusetts will, since I was making a left turn (that being number 15 on their prescribed list of findings of fault). To add insult to injury, I was given a “failure to yield” warning (by an officer who turned up after the event) that I contend is wrong but cannot legally appeal, because there is no penalty attached to it (unless I get a second ticket in the next twelve months).

Not having had a crash before (other than hitting a cow in a rental some years ago), and therefore not knowing any good body shops, I took the car to an insurer-recommended facility in Cambridge (it was drivable, but I kept it under 40 all the way home to avoid further damaging the panels). This being the US, which is generally irrationally fearful of government and regulation, we have no good consumer standards agencies run by the federal or state government (the BBB doesn’t count), so it’s really the Wild West when it comes to knowing whether a body shop is any good. Private companies like Yelp try to solve a problem that should be solved at the national level, and can only provide a few data points to feed into a decision. The body shop seems OK, but its state filings show that it moved from Brighton to Cambridge earlier this year (why was that?) and has paid fines several years in a row for filing late (but is that even unusual for this kind of place? Does it mean anything other than that they fix cars but aren’t trained lawyers or accountants?). In the end, all I could go on was the advice of the insurer, my sense that the damage wasn’t hugely structural, and the professionalism of the person I spoke with on the phone and subsequently met in person when I dropped the car off. Anyway, I hope it works out.

In the wake of this accident, I’ve had several ideas for rather diabolical (but legal) means to avoid a repeat, and certainly to avoid being accused of failing to yield or acting other than in an exemplary fashion at all times. This will involve a large quantity of sensors, cameras, and computing power installed in the car when I get it back again.


Joined the Austin Group

Saturday, December 11th, 2010

So I signed up (privately, in a personal capacity) to the Austin Group lists. I want to get more involved in both POSIX and the LSB. Both are fundamental standards, of which we need more, not fewer. There’s a reason why an increasing amount of hardware works with Linux (beyond the dedicated work of many developers): today’s hardware is built using well-defined standards. A lot of fundamental software is, too, but far from all of it. I think a lot of people simply dismiss efforts like the LSB as stuffy and antiquated, completely failing to see the value in not repeating the UNIX Wars. I have a lot of standards reading to do myself, and I don’t profess to be an expert on the LSB (yet), but I do plan on that changing.


[rant] Desktop application complexity

Saturday, December 11th, 2010

So it used to be (back in the day) that you would start an application, it would read some configuration, and you would be done. The application would be pretty much self-contained, and so would its configuration (none of this Windows registry nonsense). Heck, you could even read the configuration with a text editor and make useful changes without devoting time and effort to knowing every piece of an entire stack that keeps changing over time.

These days we have many bits and pieces that are needed just to get my rhythmbox application to run. Tonight’s irritation was caused by gnome-settings-daemon, which seems to have been designed with only local desktop/laptop users in mind, or at least completely fails to handle NFS shares that have gone away. A bit of digging determined the problem, but that kind of debugging is beyond most users. Most users who find that applications won’t run or buttons won’t do anything will just assume the world has imploded and do the Windows thing: reboot their computers. I would love it if we had either fewer of these random components or a lot more collective knowledge about their design and interaction (documentation, less frequently changing core pieces, whatever). That way, Google wouldn’t lead me only to pages of clueless users with the same problem telling each other to reboot their Linux computers. This is a problem. It is not a good future when everything is so tenuous that users have to reboot their computers to make things work right again.
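The failure mode here can be sketched concretely: a stat() on a path under a stale NFS mount can block indefinitely, which is how one dead share can freeze an otherwise unrelated daemon. Below is a minimal Python sketch (my own illustration, not anything gnome-settings-daemon actually does) of probing a path with a timeout instead of blocking on it:

```python
import os
import concurrent.futures

def probe_path(path, timeout=3.0):
    """Return True if stat(path) answers within `timeout` seconds.

    A stat() on a stale NFS mount can block indefinitely, so run it in
    a worker thread and give up after the timeout. Note that the hung
    thread itself cannot be killed and will leak; this is a sketch of
    the idea, not a production fix.
    """
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    future = pool.submit(os.stat, path)
    try:
        future.result(timeout=timeout)
        reachable = True
    except concurrent.futures.TimeoutError:
        reachable = False   # mount (or its server) is not answering
    except OSError:
        reachable = False   # path doesn't exist or isn't accessible
    pool.shutdown(wait=False)
    return reachable
```

A daemon that probed its configured locations this way could skip a dead share and carry on, rather than hanging every application that talks to it.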


Why automatically push to rawhide?

Wednesday, December 8th, 2010

One of the things that bothers me about Fedora development is that builds automatically end up in front of users just by virtue of being built. I’m a big believer in having to do something deliberate to put software in the hands of users, even if they are running a development distribution, and even if they should “be able to fix whatever breaks”. Today, if you build something in rawhide, it’ll land on user machines tomorrow (in the default case). This applies especially if you do a “known good” build and then do “just one more” before the early-morning hours when the mirrors get updated with today’s version.

In my opinion, rawhide isn’t a playpen. It’s supposed to be a place where things bake, but it should not be a place where random crap is shoved that might (or might not) randomly break things because it hasn’t even been tested on a local machine first. I think packages should always at least pass a boot/login test (or some appropriately similar activity) on a local system before they are made available in rawhide, and there should be some minimal activity involved to indicate that a package is intended for pushing (not just any build that wasn’t done as a scratch build, or that still needs testing, etc.). It doesn’t have to require a proventester, any specific QE, or anything like that, but it should require that the packager type something indicating they built this package intending for it to be used by a lot of rawhide users. Again, rawhide is for fun, but it’s not a playpen.
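To illustrate the gate I have in mind, here is a hypothetical sketch in Python. None of these names correspond to real Fedora or koji tooling; it just encodes the two conditions above, a local smoke test plus an explicit statement of intent from the packager:

```python
from dataclasses import dataclass

@dataclass
class Build:
    nvr: str            # name-version-release of the built package
    boot_tested: bool   # passed a boot/login (or similar) test locally
    intent: str         # what the packager typed when submitting

def eligible_for_rawhide(build):
    """A build reaches the rawhide push only if it was smoke-tested on
    a local machine AND the packager explicitly asked for it to be
    pushed (as opposed to a scratch or still-being-tested build)."""
    return build.boot_tested and build.intent.strip().lower() == "push"
```

With a rule like this, the “just one more build” before the mirrors update stays out of users’ hands unless the packager explicitly says it should go.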


[rant] How do I do that?

Friday, December 3rd, 2010

So, I was sitting in a hotel room earlier looking at their (simple) instructions for configuring a Windows laptop. The instructions say exactly which menu options to select, where to find them, and so on. Those menu items are always in the same place, with the same names, and have the same functionality across all updates.

I’ve been thinking a lot recently about how Linux interoperability is increasingly defined by the random decisions of engineers hacking on projects, not by industry standards. Engineers who run the latest and greatest software at all times are absolutely the last people who should get to make such decisions, because they will never notice when someone with a three-year-old distro runs into trouble; conversely, the groaning starts the moment something is more than ten minutes old. In the real world (a place where people don’t run rawhide, and might be on F-12 by now…maybe; trust me, I talk every day to computer people who use Linux but aren’t hard-core kernel/distro hackers), people don’t upgrade every day, don’t run the latest software, and compatible standards matter.

Not only should we stop re-inventing the wheel as often as we do (for no gain other than making it impossible to have a single “Linux way” to do something; sure, your whizz-bang solution boots faster and looks prettier, but I never asked for it, and what I had was probably good enough), but we should actually use industry bodies to produce standards, or at least make a bigger effort to standardize. I want, in the future, to walk into a hotel room and see “this is how you do this in Linux” instructions, not separate instructions for Fedora, Ubuntu, and so on. In the latter case, nobody will care enough to write them, and Linux will remain a mere afterthought.

Two steps to fix this problem:

  • 1) Every project should have architects who set direction and whose opinion counts as gospel on decisions that will impact user experience. They can veto the silly wheel re-invention exercises. People who don’t like that can go hack on Linux From Scratch in their basement.
  • 2) Every project should work with independent industry bodies to standardize the moment a new feature comes along, so that there is one “Linux way” to do it, and not ten different but similar ways to do the same thing.

That’s my two cents (sense) on a Friday. If those things happened, users would be the better for it. You may now arrogantly tell me how wrong I am and how much better the world would be if none of that happened.