Joined the Austin Group

December 11th, 2010

So I signed up (privately, in a personal capacity) for the Austin Group lists. I want to get more involved in both POSIX and the LSB. Both of these are fundamental standards, and we need more of them, not fewer. There’s a reason why an increasing amount of hardware works with Linux (other than purely the dedicated work of many developers), and that is because today’s hardware is built using well-defined standards. A lot of fundamental software is, too, but far from all of it. I think a lot of people simply dismiss efforts like the LSB as stuffy and antiquated, completely failing to see the value in not repeating the UNIX Wars. I have a lot of standards reading to do myself, and I don’t profess to be an expert on the LSB (yet), but I do plan on that changing.

Jon.

[rant] Desktop application complexity

December 11th, 2010

So it used to be (back in the day) that you would start an application, it would read some configuration, and you would be done. The application would be pretty much self-contained, and so would its configuration (none of this Windows registry nonsense). Heck, you could even read the configuration with a text editor and make useful changes without devoting time and effort to knowing all of the pieces of an entire stack that keeps changing over time.

These days we have many bits and pieces that are needed just to get my rhythmbox application to run. Tonight’s irritation was caused by gnome-settings-daemon, which seems to have been designed with only local desktop/laptop users in mind, or at least completely fails to handle NFS shares that have gone away. A bit of digging determined the problem, but debugging this is beyond most users. Most users, when they find that applications won’t run or buttons won’t do anything, will just assume the world has imploded and do the Windows thing: reboot their computers. I would love it if we had either fewer of these random components or a lot more collective knowledge about their design and interaction (documentation, less frequently changing core pieces, whatever). That way, Google wouldn’t lead me only to pages of clueless users with the same problem telling each other to reboot their Linux computers. This is a problem. It is not a good future when everything is so tenuous that users have to reboot their computers to make things work right again.

Jon.

Why automatically push to rawhide?

December 8th, 2010

One of the things that bothers me about Fedora development is that builds automatically end up in the distribution the moment they finish. I’m a big believer in having to do something deliberate to put software in the hands of users, even if they are running a development distribution, and even if they should “be able to fix whatever breaks”. Today, if you build something in rawhide, it’ll land on user machines tomorrow (in the default case). This applies especially if you do a “known good” build and then do “just one more” before the early morning hours when the mirrors get updated with today’s version.

In my opinion, rawhide isn’t a playpen. It’s supposed to be a place where things bake, but it should not be the place where random crap is shoved that might (or might not) randomly break things because it hasn’t even been tested on a local machine somewhere first. I think packages should always at least pass a boot/login test (or some appropriately similar check) on a local system before they are made available in rawhide, and there should be some minimal activity involved to indicate that a package is intended for pushing (not just any build that happened not to be a scratch build, or one that still needs testing, etc.). It doesn’t have to require any proventester, any specific QE, any whatever, but it should require that the packager type something indicating that they built this package intending for it to be used by a lot of rawhide users. Again, rawhide is for fun, but it’s not a playpen.
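
For what it’s worth, the build system already distinguishes throwaway builds from real ones; something like the following (the target and package names here are made up for illustration) is all it takes today:

# scratch build: koji builds the package, but the result lands nowhere
koji build --scratch dist-rawhide foo-1.0-1.fc15.src.rpm

# regular build: once it completes it is queued for the next rawhide
# compose automatically; there is no extra "I intend to push this" step
koji build dist-rawhide foo-1.0-1.fc15.src.rpm

All I’m asking for is one more deliberate step between that second command and user machines.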

Jon.

[rant] How do I do that?

December 3rd, 2010

So, I was sitting in a hotel room earlier looking at their (simple) instructions for folks to configure their Windows laptops. The instructions say exactly which menu options to select, where to find them, and so on. Those menu items are always in the same place, with the same names, and have the same functionality across all updates.

I’ve been thinking a lot recently about how Linux is increasingly defining interoperability based on the random decisions of engineers hacking on projects, and not on industry standards. Engineers who run the latest and greatest software at all times are absolutely the last people who should ever get to make such decisions, because they will never notice when someone with a three-year-old distro runs into trouble. Conversely, the groaning starts the moment something is more than ten minutes old. In the real world (a place where people don’t run rawhide, and might be on F-12 by now…maybe – trust me, I talk to computer people who use Linux but aren’t hard-core Linux kernel/distro hackers every day), people don’t upgrade every day, don’t run the latest software, and compatible standards matter.

Not only should we not re-invent the wheel as often as we do (for no gain other than to make it impossible to have a simple “Linux way” to do something – sure, your whizz-bang solution boots faster and looks prettier, but I never asked you for it, and what I had was probably good enough), but we should actually use industry bodies to produce standards, or at least make a bigger effort to standardize. I want, in the future, to walk into a hotel room and see “this is how you do this in Linux” instructions, not instructions for Fedora, Ubuntu, blah blah. Because in the latter case nobody is going to care enough to write them, and Linux will continue to be a mere afterthought.

Two steps to fix this problem:

  1. Every project should have architects who set direction and whose opinion counts as gospel on decisions that will impact user experience. They can veto the silly wheel re-invention exercises. People who don’t like that can go hack on Linux From Scratch in their basement.
  2. Every project should work with independent industry bodies to standardize the moment some new feature comes along, so that there is one “Linux way” to do it, and not ten different but similar ways to do the same thing.

That’s my two cents (sense) on a Friday. If those things happened, users would be the better for it. You may now arrogantly tell me how wrong I am and how much better the world would be if none of that happened.

Jon.

Automated recording with Asterisk

November 20th, 2010

So I decided to write a recipe for recording conference calls with Asterisk. Let’s say you have a dial-in bridge number and a bridge passcode, and you want your server to connect at certain times, announce that it is going to record (and periodically remind everyone that it is doing so), and then make the recording. Optionally, you might ask the moderator to confirm this by pressing a key. You might spend many hours figuring this out, which is why I’m going to show you how.

First, you need some canned audio files. I use the following in my example, which I record using a studio microphone and then convert into sln files with sox, for example: sox jcm-beep.wav -t raw -r 8000 -s -c 1 jcm-beep.sln (a batch-conversion sketch follows the list):

  • jcm-beep – a tone indicating recording is ongoing
  • jcm-record_question – ask the moderator if they want to record
  • jcm-record_confirm – confirm that you will begin recording
  • jcm-record_cancel – announce that you have not made a recording
  • jcm-record_reminder – reminder that you are recording
  • jcm-record_limit – announce that recording time has been exceeded
  • jcm-record_timeout – announce that the limit for a response was exceeded
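
Assuming you record each prompt as a .wav file with the names above, a shell loop (just a sketch wrapping the sox command already shown) converts the whole set in one go:

# convert each prompt to raw signed-linear, 8kHz, mono -- the
# format Asterisk expects for .sln files
for f in jcm-beep jcm-record_question jcm-record_confirm \
         jcm-record_cancel jcm-record_reminder \
         jcm-record_limit jcm-record_timeout; do
    sox ${f}.wav -t raw -r 8000 -s -c 1 ${f}.sln
done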

With those recordings in place (in /usr/share/asterisk/sounds, for example), add the following subroutine “context” to your /etc/asterisk/extensions.conf:

[conf-record]

; hangup handler: abort the subroutine if the far end hangs up
exten => h,1,Set(GOSUB_RESULT=ABORT)
exten => h,2,Return()

; dial in, key in the bridge passcode, then either ask permission
; to record (priority 100 below) or start recording immediately (exten 9)
exten => s,1,Answer()
exten => s,2,Wait(2)
exten => s,3,Set(STARTSTAMP=${EPOCH})
exten => s,4,Set(TIMEOUTSECS=30)                                                ; 30 second timeout
exten => s,5,SendDTMF(00${CONFCALLPWD},200)
exten => s,6,Wait(15)
exten => s,7,Set(WAITSTART=${EPOCH})
exten => s,8,GotoIf($[${ASKTORECORD} == 1]?100)
exten => s,9,Goto(9,1)

; repeat the question every few seconds until the timeout: the
; moderator presses 9 to record or 6 to cancel (extensions below)
exten => s,100,Set(WAITTIME=$[${EPOCH}-${WAITSTART}])
exten => s,101,BackGround(jcm-record_question)
exten => s,102,WaitExten(5)
exten => s,103,GotoIf($[${WAITTIME} <= ${TIMEOUTSECS}]?100)
exten => s,104,PlayBack(jcm-record_timeout)
exten => s,105,Set(GOSUB_RESULT=ABORT)
exten => s,106,Return()

; moderator pressed 6: cancel without recording
exten => 6,1,Answer()
exten => 6,2,PlayBack(jcm-record_cancel)
exten => 6,3,PlayBack(goodbye)
exten => 6,4,Set(GOSUB_RESULT=ABORT)
exten => 6,5,Return()

; moderator pressed 9 (or ASKTORECORD was 0): start recording
exten => 9,1,Answer()
exten => 9,2,Set(MAXRECORDSECS=4000)                                            ; just over 1 hour
exten => 9,3,GotoIf($["${CONFCALLNAME}" != ""]?100)
exten => 9,4,Set(CONFCALLNAME=conf-call-${STRFTIME(,,%Y%m%d-%H%M%S)})           ; default name
exten => 9,5,Goto(100)

exten => 9,100,Set(CALLREMINDBEEPINT=180)                                       ; time in secs between beeps (every 3 minutes)
exten => 9,101,Set(CALLREMINDMSGMULT=5)                                         ; number of beeps before a spoken reminder (every 15 minutes)
exten => 9,102,Set(CALLREMINDBEEPCNT=0)                                         ; current number of beeps
exten => 9,103,Set(CALLFILENAME=${CONFCALLNAME}-${STRFTIME(,,%Y%m%d-%H%M%S)})
exten => 9,104,Monitor(wav,${CALLFILENAME},m)                                   ; record the call, mixing both legs (m)
exten => 9,105,Set(RECORDSTART=${EPOCH})
exten => 9,106,PlayBack(jcm-beep)
exten => 9,107,PlayBack(jcm-record_reminder)
exten => 9,108,Goto(200)

; reminder loop: beep every CALLREMINDBEEPINT seconds, with a spoken
; reminder every CALLREMINDMSGMULT beeps, until MAXRECORDSECS is hit
exten => 9,200,Set(RECORDTIME=$[${EPOCH}-${RECORDSTART}])
exten => 9,201,Wait(${CALLREMINDBEEPINT})
exten => 9,202,PlayBack(jcm-beep)
exten => 9,203,Set(CALLREMINDBEEPCNT=$[${CALLREMINDBEEPCNT}+1])
exten => 9,204,GotoIf($[${CALLREMINDBEEPCNT} < ${CALLREMINDMSGMULT}]?210)
exten => 9,205,PlayBack(jcm-record_reminder)
exten => 9,206,Set(CALLREMINDBEEPCNT=0)
exten => 9,207,Goto(210)
exten => 9,210,GotoIf($[${RECORDTIME} <= ${MAXRECORDSECS}]?200)
exten => 9,211,PlayBack(jcm-record_limit)
exten => 9,212,Set(GOSUB_RESULT=ABORT)
exten => 9,213,Return()
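
To try this from a local phone before scheduling anything, you can wire the subroutine into an ordinary extension with Dial’s U() option, which runs conf-record on the outgoing channel once the far end answers. This is only a hypothetical sketch: the extension number, trunk name, bridge number, and passcode are placeholders, not my actual config.

; hypothetical test extension
exten => 500,1,Set(__CONFCALLNAME=conf-record-test)
exten => 500,2,Set(__ASKTORECORD=1)
exten => 500,3,Set(__CONFCALLPWD=2014)
; U(conf-record) runs the subroutine on the called channel; S(1) is
; the hangup workaround described below
exten => 500,4,Dial(SIP/mytrunk/19783038021,60,U(conf-record)S(1))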

Notice that there are still a few hard-coded variables in there, and that the bridge password is always prefixed with “00” (since these first digits are generally lost in my case). There are two good ways to use this. The first is to add a test extension on your local server, which you can do using the Dial command with an option including U(conf-record), as sketched above. Since there’s a bug in the version of Asterisk I am running (now fixed upstream), I also append S(1) to force the call to hang up on successful connection. A more useful way to use this is with a call file. Create something like the following call_fedora_talk.call file (I prefix variables with double underscores to ensure they are always inherited with the call properties):

Channel:	SIP_OR_IAX_TRUNK_HERE/19783038021
Callerid:	YOUR_CALLER_ID
WaitTime:	60
Context:	conf-record
Extension:	s
Priority:	1
SetVar:		__CONFCALLNAME=fedora-talk-testcall
SetVar:		__ASKTORECORD=1
SetVar:		__CONFCALLPWD=2014

This is configured to connect to Fedora Talk conference room 2014 (the general purpose room), announce itself, ask if it is ok to record, and then begin the recording (some conference systems don’t pass through DTMF, so you might need to explicitly disable ASKTORECORD in those cases – especially if you have permission anyway). If you copy this file into your /var/spool/asterisk/outgoing directory with correct ownership (for example, using Vixie cron, as sketched below), then Asterisk will detect it and begin the call. Those on the call will hear (for example): “beep…just a reminder that this call is being recorded”. Then every 3 minutes they will hear a (gentle!) tone, and every 15 minutes they will hear an audible reminder, like at the beginning, that the call is being recorded. After the call, the recording will be placed in the usual monitor directory. Have fun!
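
For the cron part, a crontab entry along these lines would do it; the schedule, paths, and staging directory here are assumptions for illustration. Writing the file elsewhere and then moving it into place matters, because Asterisk may otherwise pick up a partially written call file:

# hypothetical Vixie cron entry: start recording Wednesdays at 13:55
55 13 * * 3 cp /home/jcm/calls/call_fedora_talk.call /var/spool/asterisk/tmp/ && mv /var/spool/asterisk/tmp/call_fedora_talk.call /var/spool/asterisk/outgoing/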

Jon.

Android and Linux

November 18th, 2010

So I went to the BLU – Boston Linux User Group – meeting tonight. I hadn’t been in a long time, but the talk was by a guy (Greg Milette) writing actual Android apps that have sold in the Android Market, so I thought it might be fun. It certainly was interesting to hear about practical app development, rather than listening to the same folks bitching again about how Google might or might not have modified the kernel and other bits to provide their desired user experience. Greg demonstrated writing, building, and testing an app, and I certainly got the feeling that my intention to have a small app that periodically updates my GPS location on a server (in order to allow my Asterisk and IRC servers to automatically route calls, set me away, etc.) would not be too tricky to pull off in a reasonable amount of time. I upgraded my Nexus One to CyanogenMod 6.0 recently and think I’ll find some time to play with it sometime.

Linux User Groups used to be very hard-core affairs, where you’d get some pretty meaty stuff. These days, I generally avoid them because they’re catering to a different audience – not a wrong audience, just not my cup of coffee. These days it’s often about pretty GUI stuff I don’t care about, and very “high level” discussion. And everyone has a laptop or netbook open and is reading email rather than paying attention. Tonight wasn’t too bad, although there was a guy in front who had discovered a “cow” application that could be run on the command line and would display a picture of a cow with a speech bubble full of whatever text he passed to it. That guy sat there for about 20 minutes (at least) playing with this, typing in various text, laughing to himself, and reminding me why I stopped going to LUGs at least 5 years ago (or maybe that was when he found a video of dancing cows on YouTube to complete the theme for his evening). Of course, all of this was on a laptop running Ubuntu – as was everyone else’s. Not a Fedora in sight. I have opinions on pragmatism that I believe explain precisely why nobody was running Fedora, but nobody is interested in hearing those anyway.

Jon.

Rant: Linux Wars

November 14th, 2010

So, I got into Linux almost 15 years ago now. Back in the day, Linux was about having a cheap and convenient UNIX-like system for those who couldn’t afford expensive Sun hardware at home, or who wanted to get more involved with understanding and hacking on the internals. Linux benefited enormously from the fallout of the “UNIX Wars”, which had seen different UNIX vendors attacking one another amid huge amounts of fragmentation, before a re-unification effort centered around common standards like POSIX (and, later, SUS). Yes, these standards are not perfect, but they send a strong message of intent.

Because we had common standards for low-level pieces, for interoperability, and so on, we could at least make an effort to have portable software between very different systems. The reality wasn’t always rosy, but the intent was there. Everything from networked filesystems (NFS) to the graphical desktop (X) was centered around an understanding that there were “others” out there you needed to play nicely with. You could go implement some shiny new feature in a silo, or make your version do things in a radically new way, but the implosion of the UNIX market had demonstrated the futility of doing that in a vacuum, without putting some effort into common solutions that others were going to get behind – or at least not intentionally doing things differently in a way that couldn’t easily be integrated with other systems later on.

And then Linux grew up and became all sexy. Those involved started changing – not necessarily in bad ways, just different. Suddenly not everyone enjoyed using terminals, typing commands, or having various daemons and services around. They wanted something for “mainstream” users that had all the fluff and shine of other Operating Systems. And so various projects sprang up over the last decade, seemingly out of necessity, following the “Bazaar is always best” philosophy (which typically also comes with a hefty dose of libertarian-like laissez-faire thinking). New protocols, new interfaces, and whole new approaches just kind of happened to us without any real co-ordinated thinking at a much higher level. So now we have many different folks pulling in different directions. And each year one “Linux” becomes more different from the next “Linux”. Some want compatibility and standards-based development (even if it’s lousy at times). Others want “OMG, not some lame standard, pah! we’re the best! just do it!” and for Linux to do its own thing entirely. Neither approach is entirely correct, nor entirely wrong. But we’re not learning from UNIX either.

Anyway. Now would be a very good time for us to take a deep breath and ask ourselves whether we want the next ten years to be like the last. Do we want to continue along a path that will increasingly see two, three, or more “solutions” to common Linux issues, with vendors getting behind one or another, and folks criticizing projects for NIH mentality? Or do we want to have a moment of Zen where we realize we’d be better off playing nicely together on a more comprehensive approach to world domination? I personally would like to see industry standards bodies like the Linux Foundation drive a few years of stabilization and standardization, wherein we get behind common Linux ways to do things so we don’t turn into the next incarnation of UNIX.

I speak only for myself, etc.

Jon.