Archive for the ‘General’ Category

Open Hardware Summit 2011

Saturday, September 17th, 2011

I attended the first OHS in 2010 and enjoyed it enough that I took another day off to be there once again this year. The event brings together folks from all different backgrounds and truly represents a melting pot of those with interests in the “Open Hardware” (open source designs, firmware, software, process) movement. In fact, I’d argue that the Open Hardware movement is more inclusive than Open Source software is at this point. There are far more women attending and speaking at these events (the conference is organized by women), and far fewer of the pretentious prima donnas you see in male-dominated Open Source.

Having taken a super early (5am) train from Boston, I didn’t arrive until the keynote at 10am, so I missed the intro from Alicia and Ayah. The keynote was presented by the Arduino Team, who were part of “The Big Picture” track. As such, they addressed issues of scale and running a successful business – software can be entirely free, but hardware has an intrinsic cost, so there must always be a business model associated with it. Apparently, over 300K official Arduino Un* parts have been shipped (I assume the numbers don’t include clones – the total must be far higher, as even the statistics for software downloads exceed the total given), and at this point there are over 200 distributors in many countries, etc.

After the keynote, in the same track, Kate Hartman of OCAD University gave a talk on “Edges, Openings, and In-Betweens”. Then, Eric Wilhelm of Instructables (apparently recently acquired by Autodesk) talked about how the site allows “13 year old boys of all ages and genders” to submit an insane number of Open Source K’Nex gun designs as well as many other forms of Open Hardware. Finally, Bunnie Huang discussed how the “Best Days of Open Hardware are Yet to Come”. The talk focused on how Moore’s Law is dead and in the coming years we’ll all have “heirloom laptops”, etc. A wonderful fiction, sure to get a lot of PR in certain media, but there’s more to life than the x86-centric view of megahertz. The future will be around hyperscale, low-power parts and tight integration thereof. Innovation may have to focus on areas other than raw performance numbers, but that has been true for most of this decade already. Still, there is some truth in the idea that Open Hardware will benefit from the demise of Moore’s Law. It will be possible for a hobbyist hacker with an inexpensive FPGA to build some truly insane stuff in the coming years – but the playing field will never be as even as imagined in the talk. I enjoyed the theatrics and the use of a Chumby prototype board to overlay tweets/SMSes on the presentation.

Next up came the legal track (“Open Source Hardware Legal Landscape”). Myriam Ayass talked about CERN’s implementation of an OHS inspired/derived Open Hardware License and how various changes will be made in the wake of the formalization of the Open Hardware Definition. Alison Powell talked about the “4 freedoms” (derived from Open Source) and how they relate to Open Hardware. Michael Weinberg of Public Knowledge discussed how his organization reaches out to those in Congress and educates them about technologies such as the recent 3D printing movement. They have put on public demonstrations of the technology on Capitol Hill so that those in the House and Senate can understand the idiocy of imposing DMCA-style restrictions on 3D printing.

I ran into my (awesome) publisher at the event, and so I spent lunch discussing the current Linux book I’m working on. We both enjoy the same kinds of events and people and were able to reconnect with Alicia Gibb amongst others over lunch. I reminded Alicia that I’d connected her with Val from the Ada Initiative a few months back – I need to follow up on that front, since there’s a lot of crossover from Open Hardware into OSS.

First topic of the afternoon was “Open Hardware & Social Change”. Gabriella Levine of Protei spoke about an Open Design for an autonomous sailboat that can be used to deploy various oil spill cleanup technology (even in storms). Very cool. Next came Shigeru Kobayashi of the Gainer project, which is using Open Hardware technology to track nuclear radiation levels across Japan independently of the government (there’s a lack of trust there after the apparent failure of Tokyo to admit to the severity of the incident). Finally, Zach Lieberman received several rounds of applause for The Eyewriter Initiative, which is using Open Hardware eye tracking technology to allow a graffiti artist suffering from ALS, who only has use of his eyes, to continue his art. Not only do they have an awesome platform for art creation using eye tracking, but they have projected the designs in real time from a hospital bed onto the walls of buildings in LA. Awesome.

Second topic track of the afternoon was “Forging an Open Hardware Community”. Eric Craig Doster of iFixit talked about how they got started, how they build community, and how they leverage teardowns for PR. I asked how many iPads (for example) they had to go through to get it right. He said the most units they have gone through is 3, but they usually get it in 1. Autumn Wiggins discussed “The Upcycle Exchange”, which applies Open Source concepts to Indie Craft. Upcycling is all about re-use of things other people might regard as trash. The Upcycle Exchange is in St. Louis, which was a nice reminder that Open Hardware isn’t limited to the East/West Coasts. It’s only been going for a year, but hopefully it will continue to grow! Finally, Bre Pettis of MakerBot showed some hilarious 3D print designs involving Gangstas and other silliness. There was also a lot of seriousness – including the revelation that MakerBot has brought in over $10 million.

The third track of the afternoon was on “From Small Scale Fabrication to Large Scale Collaboration”. First up, Haig Norian discussed work being done at Columbia using organic circuits to build Open Hardware ICs, and the challenges of applying Open Hardware to traditional fabrication (which uses a lot of NDAs and closed source processes). Next, Geoffrey Barrows of Centeye discussed their low-pixel-count, low-power miniature camera technology that can be used in all manner of applications. Portions of the technology are being Opened now, which is a big change for a company that has traditionally been very secretive (having government contracts, etc.). They will have a low-cost camera available for Open Hardware enthusiasts soon. The demos involved self-guiding mini helicopters and autonomous drones. Next up, an Open laser cutter from The Lasersaur Project. Addie Wagenknecht discussed getting funding (including dealing with the bank of mom – in this case an economist – thinking the project was insane to begin with). The demos were impressive. After that, Daniel Reetz (whose day job is with the Internet Archive) talked about DIY Book Scanning using low cost technology built from digital cameras and fancy software able to dewarp images, etc. Next, Mark Norton of the Open Source Ecology project talked about their Steam Engine project. They’re building an Open steam engine for use by remote communities, and are enjoying the ability to use out-of-copyright books while also improving on very old designs. Finally, Bruce Perens talked about the longstanding co-operation between NASA, commercial space flight, and ham radio, including the deployment of AMSATs.

The final joint track was entitled “Starting up in Open Hardware”. Amanda Wozniak of the Wyss Institute talked about the engineering process used in commercial operations and the importance of documentation. James Bowman discussed how Gameduino went from the Kickstarter concept to a product in 90 days (and how they handled the scale of the initial orders). Justin Downs of Ground Lab talked about “how open development sustains small business and drives innovation”. Then, Bryan Newbold of Octopart gave an insightful talk on the “Economics of Electronic Components for Small Buyers”: scaling up to take advantage of price breaks is important, but it might not be necessary to go from 2,000 units to 10,000 or more. Then, Nathan Seidle gave an outstanding talk (including real numbers) on how SparkFun got going for him right out of college with no previous experience as a business owner. Nathan is truly inspiring at the best of times, and “Where does transparency end?” made many good points – including that you don’t need everything to be open. Your customers don’t need to know how your internal logistics process works, as an example. Finally, Mitch Altman of TV-B-Gone (a personal favorite product that I keep on hand for emergency use) talked about making a business from your fun project.

The last track of the day was “Breakouts”, which was divided between a number of rooms. I attended the educational track, which was interesting, although I was suffering from the long day by that point so I didn’t get a lot from the session. The general demos afterward were very cool. I saw several 3D printers that I’d not run across before, from Ultimaker as well as MakerBot, etc.

Overall, I was very impressed with the conference. The format was a little different this year. Most of the day was in one large auditorium that was completely filled (standing room only), and consisted almost entirely of an “Ignite”-style lightning talk format. I suspect future events will need more rooms (a nice problem to have), and I hope they’ll handle the growth well. One piece of advice for the organizers would be to require presenters to submit their talks in PDF prior to the event, and then to have one laptop drive all of the presentations. There were plenty of people with Linux laptops having to adjust settings or reboot, and one person using Microsoft Office on a Mac had three crashes in a row on the same set of slides. A simple rig with one laptop per room, running a PDF slideshow, would be ideal. No modeline fiddling, no reboots, none of that typical conference nonsense.

Water flow: I need to know

Sunday, August 28th, 2011

So the last time there was a major water crisis in the Cambridge area – one in which we had reduced pressure due to a water main leak – I noticed something cool: it seems that toilets (at least in this State, if not the entire US) – and presumably other non-essential water devices – have a special inlet valve that operates only when water pressure is above a certain level. The net effect (quite ingeniously) appears to be that, when the water pressure is reduced, the toilet won’t fill. Thus, the scarcer resource is better utilized helping people drink rather than flush toilets.

Anyway. My question is, who designed this? How widespread is the adoption of this standard? And (presumably) did Cambridge intentionally reduce the water pressure today due to the hurricane, in order to keep the sewers from overflowing as people used water in non-essential ways? Inquiring minds want to know.

Jon.

Open Letter to Southern Vectis

Wednesday, August 17th, 2011

ATTN: Alex Carter
Chief Executive
Southern Vectis

Dear Mr. Carter,

I have recently had the opportunity to use your bus services on the Isle of Wight. I must say, I have been very underwhelmed and disappointed by the service. Not only is it expensive (which can be forgiven given the size of the market served), but it is inefficient, and it is very clear (to me) that little or no attempt at co-ordination is being made between Southern Vectis and the ferry service operators on the Island (who deliver your captive audience and are therefore of some utility to your company). This point was driven home as we arrived in East Cowes exactly as the (hourly) ferry departed from the terminal, needlessly wasting an hour of our time.

Mr Carter, you need to reach out to Red Funnel and have a dialogue. Pick up the phone, call them, and steer the conversation toward how you can collectively work together to offer your respective customers the kind of great experience they should be demanding from you. Perhaps you can find whole new ways to expand your service (and financial return) through shared initiatives – beginning with co-ordinated services. Such an initiative would also require that subsequent changes to your respective timetables are handled appropriately – don’t blame one another; embrace the business opportunity by reaching out and having a renewed dialogue about how best to resolve such issues.

Furthermore, you need to reconsider the impression your overly aggressive use of video cameras and surveillance on board your buses makes upon your paying, law-abiding customers. I realize a pandemic disease exists within the UK that causes people to embrace and believe such utterly intrusive use of cameras is reasonable, but ask yourself whether you actually need to have *four* individual cameras on just one level of your buses – complete with a TV screen passive-aggressively reminding customers to conform to societal norms by displaying scrolling footage. Would even one camera not suffice? Ask yourself where it ends – a camera watching every seat individually?

Next time I visit the island, I hope to discover a Brave New World in which you have spoken with Red Funnel, and in which I receive great customer service in return for the fare level charged.

Yours,

Jon Masters

Porting Linux: part 1 (of many)

Sunday, June 5th, 2011

So I’m working on a book at the moment, to be titled “Porting Linux”, which will cover the process of porting the kernel to new architectures (and platforms within those architectures). It happens to coincide with a number of my interests. Anyway, I thought I would start making some online notes about porting. This is the first in an ongoing series of mini-dumps of unorganized thoughts on the topics I am researching/working on for the book.

At a high level, a new architecture port[0] needs to cover the following topics:

  • Fundamentals – the bitness and endianness of the system (bitsperlong, byteorder, etc.). Stuff that goes in system.h includes read_barrier_depends handling, and instruction sync and memory barrier definitions.
  • Atomic and bit operations – atomic, bitops, swab, etc. Many of these are used generically by the reference asm-generic code and core kernel to implement higher level primitives.
  • CPU and caching – SMP setup, cache management, percpu bits, topology, procfs, etc. The CPU(s) are bootstrapped in head.S and friends, but then they need functions to handle non-MMU items such as IPIs, etc.
  • Init – Entry into the kernel, establishing exception vectors, calling into start_kernel. This is head.S and friends.
  • Interrupts and exceptions – IRQ setup, traps, entry, etc. The low-level exceptions might live in head, but they will call into generic C-level code to implement various functionality (specific higher-level functions, e.g. for the VM, live elsewhere).
  • IO operations – IO, PCI root setup, legacy IDE bits, etc. Various miscellaneous stuff, especially the generic, panic-inducing inb/outb functions on modern arches without separate IO memory.
  • Library functions – Checksum support, asm-optimized stuff not specifically in another subsystem.
  • Locking – Spinlock support
  • Memory management – Init, faults, TLB setup, page management, MMU contexts, memcpy, strings, etc.
  • Modules – Load, unload, and relocation
  • Signals – Signal contexts, signal delivery, compat signal handling
  • Tasks – current macros, thread_info, unistd, process, mmap, ELF and auxiliary vectors
  • Time – timex, time setup
  • Linking – asm-offsets, linkage, symbols exported in assembly, etc.
  • Console drivers – early_printk support and a minimal character driver. The only driver work actually required for a port is being able to squirt stuff straight out the UART in early_printk and minimally handle the boot console output (a rough sketch follows below).
  • Debugging – backtrace, opcode disassembly, stack unwind, ftrace, kexec, kgdb, kprobes, ptrace

Those are the areas that need to be covered for a minimally working port.
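
To make the console bullet concrete, here is a minimal sketch of what an early boot console might look like for a hypothetical new architecture. Everything here is invented for illustration – the “myarch” naming, the MMIO address, and the register offset – and a real port would use its platform’s actual UART and hook the registration into its early setup path (for example from setup_arch()). The struct console / register_console interface and the CON_BOOT flag, though, are the standard kernel mechanisms.

/*
 * Hypothetical early boot console for a new architecture port.
 * The UART address and register layout are made up for illustration.
 */
#include <linux/console.h>
#include <linux/init.h>
#include <linux/io.h>

#define MYARCH_UART_BASE        0x10000000UL    /* hypothetical MMIO address */
#define MYARCH_UART_TX          0x00            /* hypothetical TX data register */

static void myarch_early_putc(char c)
{
        void __iomem *uart = (void __iomem *)MYARCH_UART_BASE;

        /* A real UART would poll a status bit here before writing. */
        writeb(c, uart + MYARCH_UART_TX);
}

static void myarch_early_write(struct console *con, const char *s,
                               unsigned int n)
{
        while (n-- && *s) {
                if (*s == '\n')
                        myarch_early_putc('\r');
                myarch_early_putc(*s++);
        }
}

static struct console myarch_early_console = {
        .name   = "earlycon",
        .write  = myarch_early_write,
        .flags  = CON_PRINTBUFFER | CON_BOOT,
        .index  = -1,
};

/* Called very early, e.g. from the arch's setup_arch(). */
void __init myarch_setup_early_console(void)
{
        register_console(&myarch_early_console);
}

Because the console is registered with CON_BOOT, it is automatically unregistered once a real console driver comes up later in boot, so the port can get printk output long before it has a proper tty/serial driver.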

Jon.

[0] Based on studying recent ports (tile, microblaze, etc.) from the first patch to the last, and long-time established existing ports (ARM, PowerPC, x86, etc.).

Response to “Why systemd?”

Friday, April 29th, 2011

So I read Lennart’s blog post entitled Why systemd?. In it, he makes a number of comparisons between systemd and the two other Linux init systems that are still in widespread use (this being the third init system some distributions have adopted within the last few years). Overall, he makes a good argument that systemd has many nice and exciting features, and I’m sure they are of interest to various people who want their init system to be SYSV on steroids. Here are some of them:

  • Interfacing via D-BUS
  • Shell-free bootup
  • Modular C coded early boot services included
  • Socket-based Activation
  • Socket-based Activation: inetd compatibility
  • Mount handling
  • fsck handling
  • Quota handling
  • Automount handling
  • Swap handling
  • Encrypted hard disk handling (LUKS)
  • Infrastructure for creating, removing, cleaning up of temporary and volatile files
  • Save/restore random seed
  • Static loading of kernel modules

These are all things I don’t want built into my init system. To me, there are many good reasons that they have traditionally been handled using simple, easy-to-edit scripts, and that’s where I personally feel they should belong. In my mind, some don’t even make sense to build directly into the init system itself, such as automounting and the like (that belongs in autofs and friends). There’s more, but the main point I want to make here is that a list of comparisons should not simply be an inverted feature list of the replacement (features the thing being replaced obviously may not have). A better comparison would be the user experience. If I’m an admin, all of the new features are nice, but do I need to change my workflow for the new tool? And at the end of the day, what am I winning overall in terms of experience?
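
To be clear about what one of the headline items above actually means in practice: with socket-based activation, the init system binds a service’s listening socket itself and only starts the daemon when a client connects, handing it the already-bound socket. Below is a rough sketch of what that looks like from a (hypothetical) daemon’s side using the sd-daemon library; the port number and fallback path are invented for illustration.

/*
 * Hypothetical daemon startup illustrating socket activation.
 * Link against the sd-daemon support library (libsystemd-daemon
 * or libsystemd, depending on the version).
 */
#include <systemd/sd-daemon.h>

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static int get_listen_socket(void)
{
        int n = sd_listen_fds(0);

        /* The init system passed us pre-bound socket(s): use the first one. */
        if (n > 0)
                return SD_LISTEN_FDS_START;

        /* Not socket-activated: create, bind, and listen ourselves. */
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
                perror("socket");
                exit(EXIT_FAILURE);
        }

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
        addr.sin_port = htons(7777); /* hypothetical port */

        if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
            listen(fd, SOMAXCONN) < 0) {
                perror("bind/listen");
                exit(EXIT_FAILURE);
        }
        return fd;
}

int main(void)
{
        int listen_fd = get_listen_socket();

        printf("listening on fd %d\n", listen_fd);
        /* accept() loop would go here */
        return 0;
}

Under inetd, the rough equivalent was handing an already-accepted connection to the service on stdin/stdout; systemd’s variant hands over the listening socket itself, starting at file descriptor 3 (SD_LISTEN_FDS_START). Whether that mechanism belongs in PID 1 or in a separate tool is exactly the question being argued here.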

I’m not one of those who actually wanted YAIS (Yet Another Init System). No offence particularly to systemd, but I preferred good old fashioned sysvinit. It worked for longer than many people have been using Linux (or UNIX), it was well understood, and well documented. It was far from perfect, but it got system services started. I can’t remember ever yelling at SYSV init and saying “wow, if only you weren’t so crappy, if only you started every service when I connected to it”. In fact, it was a mature tool that did everything I needed it to. It took a little longer to boot my system than it might have, but then like most real users, I use suspend and other features that mean I boot from scratch infrequently, or I run servers where I really don’t care at all. I wouldn’t have cared if it took 10 minutes to boot my laptop, or an hour to boot my server…well, I exaggerate, but you get the point. And inetd? Or xinetd? Automount? Good enough for my uses as separate tools.

Jon.

On desktop re-invention

Sunday, April 24th, 2011

If you live in, or have visited, the United States, you might be struck by how little product packaging and design changes over the years (or perhaps you’re so familiar with it that you’ve stopped noticing). That bar of candy? It probably has a very similar label to the one it had twenty or more years ago. That top loading washing machine design? Still in widespread use today because it was the design that first gained market traction.

Europeans don’t always understand this, but Americans are generally very conservative in their adoption of iterations of existing technologies. It’s not just Americans, but let’s take this example for the moment. When a new technology first comes along, it’s free rein – do whatever you like – but as it gains widespread adoption, the market resists more than a certain level of change. People become familiar and comfortable with a certain mode of operation and expect that to remain consistent from one year (or decade) to the next, until the Next Big Thing. We saw this with the introduction of every big technology over the last 50-100 years.

In the technology space, we can observe how the PC has become a very popular, well established platform. This hasn’t happened only because Microsoft are somehow “evil”. It’s happened through consistency and standardization (even if it’s not an Open standard, it’s still a de facto standard). You can learn how to use a Windows or Mac system once, and apply the same concepts from one year (or decade) to the next. With the advent of tablets, and smartphones, there’s an opportunity to start afresh, but neither Apple nor Google are going to massively change the fundamentals of the user experience in their mobile platforms at this point. That’s not to say they can’t innovate, but they can’t break suddenly with the established customer expectations.

Like it or not, you get one chance to do this right before traction sets in and certain expectations are created. Ignore this at your own cost. Microsoft might love to fix all of the problems with the Windows experience (they are actually not entirely stupid), but they can’t do that now without alienating their established user base in the process.

My opinion is that GNOME 3 made a fundamental mistake in breaking with tradition. Innovation on that scale should target newer, less well-established platforms, such as netbooks, tablets, and the like – places where there’s still an opportunity to define the Next Big Thing. Innovate with the new, don’t break with decades of established user experience on the old.

Thus concludes consumer behavior 101. You don’t have to like it, but you do have to live with it. And though you can certainly flame me for saying this, the reality is that this is the reason next year will be “the year of the Linux desktop”, and the year after, and the year after…repeat until the wheel re-invention exercise stops.

Jon.

On switching to KDE/Xfce

Sunday, March 13th, 2011

So call me old fashioned, but I don’t like the direction being taken by modern “User Experience” design. To me, GNOME Shell provides an experience that I am supposed to love, but it doesn’t empower me to change that experience to fit my existing habits when it inevitably falls short of my personal preference. Perhaps I’m just “wrong” and I should be doing everything differently, but I suspect that, like some other users, my reaction to this enforced pattern of use (a trend that has been a long time coming with GNOME) is to be driven away from GNOME as a desktop environment of choice, and toward something else. What that “something else” should be is a very good question.

Unlike with the GNOME 2 panel, in GNOME 3 I can no longer choose to have the clock where I want it, remove some of the unnecessary icons, or even add weather applets and information to the screen. At the same time, I am supposed to believe everything is now an “Activity”, with a single menu button being used to drive everything I do, rather than various shortcuts and icons around the screen. I can’t even have desktop icons or launch a terminal via the right click menu (which no longer exists in the default setup). I’m also not at all fond of the effects, or the new window manager. In fact, where GNOME 2.x did almost everything I wanted, it seems that GNOME 3 does the opposite. Where it used to be about productivity, it’s now about appearance and effects, at the expense of more experienced users.

So I find myself being a reluctant “convert” to KDE and Xfce in the past few days. I don’t want to switch, but I can’t stick with the new GNOME 3 desktop either. I like a lot of the GNOME applications, I like the libraries, and I plan to continue to use them. But at the same time, KDE and Xfce give me a more familiar look and feel (after a lot of tweaking to make them look just like the GNOME 2 desktop they replace). I’m going to give Xfce a go on my Rawhide netbook for a while and see if it can be my upgrade path elsewhere, too. If not, I shall try KDE some more, etc. I did try xmonad, but I do actually want a “desktop” environment. I just don’t want an environment that seems tailored for netbooks and novices rather than experienced veterans of UNIX and Linux.

Don’t think I’m happy with this, because I’m not. But I have tried the alphas, the betas, the test images, and I have watched things head in a direction I just can’t agree with on a personal use level. It’s a sad day for me because I’ve been using GNOME for a decade. I still have GNOME 2.x installed on many other systems, but it seems that its days are numbered, and it, too, will need to be replaced.

Jon.