Red Hat Enterprise Linux Server for ARM Development Preview

June 22nd, 2015

http://www.redhat.com/en/about/blog/long-arm-linux-red-hat-enterprise-linux-server-arm-development-preview

A few minutes ago, Red Hat announced Red Hat Enterprise Linux Server for ARM Development Preview 7.1. This is a 64-bit (only) Operating System targeting “AArch64”, the 64-bit ARM machine execution state. It’s intended to help build out the Red Hat story within the ARM server ecosystem, allowing partners to port their applications and ISVs to engage with the same trusted Operating System stack that they’ve worked with for many years. It is not a supported Operating System – so you can’t call up for support at this time – but you can use it to port your software to run on ARM servers. And you get all of the Red Hat goodness you would expect, from installation (all of the usual automation using Kickstart and tooling built upon that), through runtime management and diagnostics (we have a fully functional version of UEFI-based kexec/kdump working with full crashdump support).
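To make the Kickstart point concrete: an unattended install on this preview drives off the same Kickstart files as on any other architecture. The fragment below is a minimal illustrative sketch only (the package set, networking, and partitioning choices are my assumptions, not taken from the announcement):

```
# Minimal Kickstart sketch for an unattended install (illustrative).
text
lang en_US.UTF-8
keyboard us
timezone America/New_York --utc
rootpw --plaintext changeme
network --bootproto=dhcp --activate
bootloader --timeout=5
clearpart --all --initlabel
autopart
%packages
@core
kexec-tools
%end
reboot
```

Point the installer at a file like this (e.g. via `inst.ks=` on the kernel command line) and the rest of the install proceeds hands-off, exactly as it would on x86.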

In short, everything you would expect from a 64-bit clean, Enterprise-quality Operating System. There are no shortcuts in RHELSA DP. We build one kernel (3.19-based in this release) that boots and runs exactly as you would expect on any other architecture. And we went the extra mile to make sure all of the tools users are familiar with will “just work”, right down to helping bring SMBIOS 3 to the ARM Architecture and to port tools such as “dmidecode” (especially working with vendors to ensure their firmware tables are populated correctly), which users and scripts widely rely upon to discover information for systems management. Our support of industry standards – such as ECAM-based PCIe enumeration using ACPI – means that plug-in adapter cards also “just work”. And we’re not done. We’re poking all of the vendors to upstream their drivers as a condition of ever getting code into anything we might do in the future. It’s part of our very ethos that ARM be about upstream code.

To learn more about what we announced today in Red Hat Enterprise Linux Server for ARM Development Preview, you can come to our session at Red Hat Summit (this Wednesday, 10:40am). It’s hosted by me (I can now of course share publicly that I am technical lead for RHELSA DP) and my awesome colleague Yan Fisher (who leads our marketing efforts around RHELSA DP), and will include a live demo of Apache Spark doing some analytics. You can also see many other hardware platforms that are built to comply with these industry standards running our Operating System at Summit (some with fresh and shiny new firmware upgrades that will migrate them over to support these emerging ARM industry standards). Stay tuned over the next few days for some exciting news and developments!

A little more history

We didn’t get here overnight. It’s taken years of effort by many many people to port our Operating System, infrastructure, tooling, and maintainers over to the 64-bit ARM Architecture, and I am very proud of each and every one of the people who have been involved. What started out as a small skunkworks team in the “Yosemite Project” has become integrated with the whole, and now nearly everyone has more than a little ARM expertise and exposure. With that in mind, I figure I can finally share a little history behind this project and how we got from there to today’s news.

The ARM project, originally known as “Yosemite” (it’s my favorite US National Park), began life 4.5 years ago in a meeting I had with one of my execs. At the time, 32-bit ARMv7 (with hardware floating point!) was the new new shiny, and we were super excited by PandaBoards and BeagleBones (not even -xM, but the original one). It took a stretch of the imagination to go from these tiny embedded engineering boards to a full multi-blade server system, but to give my management the credit they fully deserve, they have rarely been shy about thinking to the future and to newly emerging technologies.

Yet even at the time, we knew that our real interest was in the (as yet unannounced) rumored-to-exist 64-bit ARM Architecture, at least as far as Enterprise-focused server systems are concerned. Indeed, in my first ever email after creating our internal ARM engineering list, I said the following on the topic of 64-bit ARM:

* The 64-million-dollar question is 64-bit ARM support. I’ve heard various mumblings from contacts, and of course there has been some press on this front. We are going to reach out to ARM to see what we can find out. *If* there will be a 64-bit architecture in the medium term, this may instead become the target of interest to Red Hat (where we care mostly about ARM for things like low power server blades), while Fedora would obviously retain both ARMv5 and ARMv7 in any case. This is a high priority item in the short term to find out this information.

Needless to say, we ultimately did discover that there was, in fact, a 64-bit version of the architecture under development, and therein began a beautiful friendship with the ARM Architecture team that has lasted for many years (I just love those guys to pieces). We began working with the early silicon vendors from the very beginning. Over the years, I’ve personally had some wild and wacky adventures assisting in everything from silicon bringup and debug (with many vendors) to design reviews for multiple future generations that will dazzle us in the years to come. There’s some truly awesome stuff coming that will bring us lots of fun toys to play with in future years.

When we began our team, we quickly got engaged with the Fedora ARM project, helping to bootstrap the “armv7hl” 32-bit architecture. This served multiple purposes – it helped us to do the right thing (working through Fedora to get what ultimately benefits everyone) and to learn how to bootstrap a new ARM Architecture from scratch once we got to the 64-bit version we knew was in flight. That same very first email from me to our internal engineering team also contained these important words on the subject of Fedora:

* We obviously want to work with the Fedora ARM community that already exists, and not appear to come in to take over. With that in mind, there have already been discussions with Chris Tyler and Paul Whalen that most (or all) of you have been involved in. Our goal is two-fold here: to help improve the Fedora ARM experience, and to be as un-intrusive as possible in our internal efforts at Red Hat engineering. We will come up with a list of priority goals for areas we are best able to help on.

I would like to think we have done our best to live up to our own standards over the past few years as far as doing right by the communities in which we operate (and it is great to see folks like Peter Robinson and Paul Whalen continuing to drive Fedora ARM forward). Looking back on those heady days, I can also amuse and ridicule myself with a number of other thoughts I shared on ways to build ARM servers over the next 5 years:

* we will need to work on questions such as:
- Choosing a kernel tree (upstream, OMAP, Hybrid we make, etc.) and building a single, generic 32-bit ARM binary kernel image. Although ARM traditionally has various machine/platform logic, I feel that the newer stuff like Grant Likely’s Flattened Device Tree work is essential to making a supportable Enterprise kernel. We will need to work with vendors to see what they are doing on the “BIOS”/platform front, and hopefully can get behind something like FDT.

So you can see that I wasn’t always the UEFI/ACPI “fanboy” that I became. Indeed, I have never been wedded to any one technology. I have, instead, been “wedded” to a successful outcome for the overall computing industry, in which ARM is part of a vibrant range of consumer choices. After many early conversations (more than 4 years ago now) with a wide variety of industry players, it became clear to me that the way for ARM to succeed in servers – by having a standard platform against which a single Enterprise-quality Operating System image could be constructed, and used in a manner highly familiar to those deploying on ARM – was to bring all of the hardware vendors together and agree upon some common standards that could help us succeed together. Which is what we did. I think the end result is a good one for the success of the ARM partnership, which is what matters in the end.

As a result of the work that has been done over the past few years, we have now been able to build an Operating System that feels just like other Enterprise Operating Systems with which many of you are very familiar. And, indeed, many of you have been running RHELSA DP in various incarnations during development. This includes our good friends at Linaro, who have provided much of the platform engineering work required to support emerging ARM platform standards with a single OS kernel image. Linaro have been instrumental in so many ways in bringing the entire Linux-on-ARM emerging server ecosystem together and we look forward to many more wonderful years working on ARM servers together.

RHELSA DP targets industry standards that we have helped to drive for the past few years, including the ARM SBSA (Server Base System Architecture) and the ARM SBBR (Server Base Boot Requirements). These will collectively allow for a single 64-bit ARM Enterprise server Operating System image that supports the full range of compliant systems out of the box (and, through minor driver updates, many future systems that have yet to be released). This is fantastic news both for the emerging ARM server ecosystem and for the Red Hat family overall. It’s only going to get more exciting over time. There are so many designs we have assisted in developing over the past few years that I look forward to seeing come to market as this ecosystem matures.

Today I am proud to share Red Hat Enterprise Linux Server for ARM Development Preview with our many wonderful partners. I look forward to working with all of those who want to join us in building an open and standards based ecosystem of awesomeness. Now is a great time to reach out to myself and the team to learn more about how to get involved. See you at Red Hat Summit!

On the future of the computing industry (part 1)

January 2nd, 2014

Over the course of the next few months, I will write much about where I see the industry heading in 2014, 2015, and over the coming decades. This first post is about the move to verticalized solutions, but also about the potential for a truly Open, commoditized Cloud computing platform of the future.

The world I see ahead is a future of inevitable, unpreventable verticalization, which can be steered (by a few good men and women) to retain an Open (enough) software platform. It’s not all about ARM. But let’s take ARM as an example (and only an example here). For a “few” million dollars, I can license an architecture and SoC component IP sufficient to build my own “Server-on-Chip” style design integrating all of the features that I want on-die and/or on-package. For a bit more, I can license the architecture itself and go build it myself. “It” isn’t a Computer Architecture. “It” is a Hyperscale server SoC exploiting integration advances to do everything on-chip that we used to build in giant boxes filled with air. There are plenty of people out there you can hire to go do this. Some will succeed, others will fail, but the minds are available on the market today.

All of this integration is possible because Moore’s Law said we would reach this point by now (if you think Moore’s Law is all about getting faster and faster, you’ve been drinking the wrong koolaid for years; it’s actually about circuit density). Meanwhile, his friend Dennard tells us that the traditional vendors have been fighting a war of MHz, building cathedrals that won’t scale as they try ever more clever tricks. My favorite quote on the matter comes from AMD’s Andrew Feldman: “you’re using a Space Shuttle to go to a Grocery Store!”. What we have is “good enough”. And the Innovator’s Dilemma tells us the rest. The future isn’t about architecture X vs. architecture Y. It’s about energy, integrated fabrics, and Hyperscale designs combining good-enough compute performance at obscene levels of density, fueling the scale we need for tomorrow. Take a look around at the industry and see where some of the leading minds in Computer Architecture are landing (hint: use your eyes and ears) and you’ll see that this train of verticalization has left the station, and it won’t be returning. Those vendors you like today? There will be 20 more of them tomorrow.

Done right, we reach a point about ten years from now where computing becomes a simple utility. Amazon spot pricing, move over. In fact, Cloud Computing as we know it today is totally nonsensical drivel. In the future, units of computation will be standardized on some level to the point that they are traded on open markets as commodities, with speculators trading on futures in much the same way that they do on crops and other commodities today (I believe this so strongly that I preemptively filed patents in this area several years ago). Workloads will dynamically move around the world in response to many stimuli (instantaneous pricing, weather, energy availability, economic, security, and political concerns, etc.). Nobody will pay for their Operating System per se, but they will pay for complete solutions that provide all of the plumbing necessary to build the new “Cloud” (I hate that term) of tomorrow. And the company (or individuals) who build the technology that can power the exchanges and commoditized computing of tomorrow will be the ones cashing in at the end.

The coming decade will also see the rise of heavily integrated hardware and software solutions. As I noted above, these days “anyone” can build their own custom SoC. And many will. Many of these will follow the Apple model, building walled gardens running their own hardware, own firmware, and own Operating System. So “done wrong”, the future becomes a scary Apple-move-over Dystopia in which we long for the “good old days” of the Unix Wars, when vendors produced such “compatible” systems. Some of the really big boys have all of the incentive in the world to go build these walled gardens, and we have very little time to steer them right.

libkmod replaces module-init-tools

December 20th, 2011

UPDATE: For more information, consider joining #kmod on Freenode. Development is using the existing linux-modules@vger.kernel.org mailing list.

The team at ProFUSION (and other helpful contributors) have done an awesome job of quickly turning the Plumber’s Wishlist item for a Linux kernel module loading library into reality. libkmod is linkable into udev, will speed up module loading, and has a stated goal of remaining backward compatible with the existing behaviors already present within module-init-tools. Therefore, the average user should notice nothing other than an improvement in module load times when switching to the replacement library. Those features not yet present in libkmod will be added over the coming weeks. The new library could do with some testing on non-x86 and bi-endian platforms, and will need some further thought around index caching (e.g. within long-lived processes), but is ready enough for wider use. To find out more about the library, visit the initial blog posting from the ProFUSION team:

http://www.politreco.com/2011/12/announce-kmod-1/

Jon.

On Citizen Journalism

November 30th, 2011

We are entering a very dark and dangerous time for humanity. The rise of social media and the mediocre web (in which everyone’s voice, no matter how uninformed, is equal) can be a very positive force for good. Connecting people in far-flung parts of the world allows “iReports”, leaks, government suppression, and many other issues to come to light. But at the same time, those who seek the utter demise of traditional media represent some of the most uninformed malcontents, who will cause great harm to our country and to the wider world at large.

Traditional news media, like the New York Times, are under constant threat from those who seek their destruction and replacement with mindless crap written in 140 characters or fewer. The regurgitated opinion of the collective Tweeters of the world will not create media outlets in war zones, or fund researchers to trawl through years of government records. Wikileaks alone will not displace the need for professional, carefully presented (fair) treatment of the horribly offensive abuses of the governments of the world. RSS aggregation of news media and the proliferation of links online have been phenomenal in disseminating news, and readers such as those available from Google (and others) have presented it well. But all of these news stories ultimately come from somewhere real, somewhere tangible, somewhere less Web 2.0 and more “real world 1.0”. Take the Times (and a few others) out of the picture and you’ll quickly notice the dearth of good quality news sources available for others to regurgitate.

This is why I have two subscriptions to the New York Times. I pay for my quality journalism, and I pay double (or many times more) what some others pay because I care that the United States’ paper of record remain in business. Those of us who care must band together to disrupt and undermine those who seek to destroy quality journalism and replace it with the mediocre populist nonsense of the kind favored by contestants on Reality TV shows. Is this elitist? Absolutely. It is absolutely the case that most people don’t care about the minutiae reported in the Times, about the investigative undercover stories, or about the analysis that goes into them. Most care more about what some famous moron said today or which YouTube video is hot. And that’s ok. Let them eat cake, and let them enjoy it too. But don’t take away quality news from those of us who are interested in knowing what’s really going on in the world.

Jon.

Spotify desktop app

October 3rd, 2011

I bought a Spotify subscription recently. I like the concept, and the Android app is just about usable (though not an Apple-level application at this point). What is really driving me nuts is that, if you fall into the trap of registering with your Facebook account (they present it as a single-sign-on option, but really it’s to push the integration), Spotify goes into a special obnoxious mode wherein it insists that you always have the app installed in your Facebook account. Changing permissions on the app or removing its ability to post to your account only invites an error – especially in the second case, wherein it will bug you *every* time you play a track that isn’t posted to your timeline. Do my friends really care /that/ much about what I’m listening to that they can’t just follow my last.fm and leave it at that?

I’ve tried complaining to Spotify, asking how to switch my account to the non-Facebook mode (that hopefully just plays songs, like I paid for). I have heard nothing yet. My next recourse will be to complain to Facebook that Spotify have an app that is malicious and should be removed from the site. I suspect that would then get a customer service reply from Spotify. Not my preferred means to make contact and get this fixed, but certainly an option.

Jon.

City of Boston parking failure

October 3rd, 2011

So my girlfriend had a couple of outstanding parking tickets (actually, not her tickets or mine, but that’s a long story) and her car got booted. Excessive, but ok. What’s not ok is that they did this last thing on a Friday afternoon (4:40pm), right after their office closed for the week at 4:30pm, then gave her two tickets for failing to move her car over the weekend.

This kind of thing happens because busybodies run around generating revenue that the City is too scared to raise through saner means (by increasing taxes), and so the parking situation has gotten out of control. It’s ludicrous to hold someone’s car hostage and then charge them for failing to move that car, with no third option offered. This isn’t the first thing Boston has done to annoy me along these lines.

It’s important to realize that cities like Boston only understand things that impact tax revenues. Moving to Boston next year is very unlikely as a result – apparently they don’t need to generate revenue from me, and they’re waving neon signs saying “we’re unreasonable, don’t live here”.

Jon.

On standards – state car inspections

October 1st, 2011

So I was waiting this afternoon for my annual Massachusetts State Safety and Emissions test. This is mandated by this state, as well as most others. The precise details of the test vary, but the mechanics are identical, using an industry (and government) standardized connector and protocol, OBD-II. Thanks to standards, consumers are spared the following little scenario, which played out in my head as I was waiting:

consumer: “I’m here to get my car inspected”
mechanic: “ok, which model car do you have?”
consumer: “The frobulator 9000, second edition, build number 29785, release 27, from yesterday”
mechanic: “ah, yes, I remember it well. Unfortunately, that’s ancient history at this point. Yea. Last night, we got this awesome idea that we’d rewrite the whole thing…but don’t worry, in a few years it’s gonna be awesome!”
consumer: “dude, I just want my car inspected…”
etc.

This is a scenario that plays out all too often in the Linux community. Not ubiquitously. There are many of us who understand the true value of longevity, standards, and consumer demand. But there are also many who are losing sight of how consumers actually work, and what they actually want. What they want is not a moving target; they want a rigid “just works and I don’t care” as their modus operandi. Let’s hope we can get more of our very own OBD-II standards, defined as an entire industry through pragmatic agreement between everyone involved.

Jon.