
Month: December 2014

Fin

Note: This is the least useful article I’ve ever written.  If you haven’t read my technical articles, do yourself a favour: skip this and read those other articles instead.  There are 27,492 vaguely (or not so vaguely) angsty, bitter, pseudo-introspective articles published to Hacker News every day; there’s barely one marginally useful technical article a week.  I’ve just linked you a month’s worth and change.  Trust me on this.  But of course, you won’t, which in part is why we’re here.  On with it, then.

After 2 moderately successful careers spanning over 17 years, I’m retiring today.

Retirement does not mean idleness; my small ranch in North Idaho needs enough work to keep me busy until even the Social Security Administration, in its wisdom, thinks it fit that I retire (which statistically speaking would offer approximately 3 weeks in which to move to Arizona and learn to hate golf before beginning one’s terminal career as a font of Medicare reimbursement chits).  There are ditches to dig, outbuildings to erect, trees to fell, slash to burn, livestock to husband, fences to string, game to hunt, and garden plots to dig, plant, and harvest, to say nothing of the ordinary joys of home ownership and the added labours of independent off-grid living.  In the unlikely event that I ever “finish” all of that, there is no end of work to be done restoring, conserving, and improving open-pollinated plant varieties and heritage livestock breeds, the shockingly depleted remnants of our 12,000-year-old agricultural endowment and the alternative to patented genetically-modified monocultures.  I look forward to doing my part for those efforts, and I will not be bored.  There’s plenty more to be said about the positive reasons I’m hanging it up and what’s next, but they’re off-topic in this venue and for this audience, so I’ll stop there.  If you’re interested — and if you consume food or water, or know someone who does, you should be — there is no shortage of resources out there with which to educate yourself on these topics.

Not Why I’m Retiring

I did not win the lottery, literally or figuratively.  While my first career did include the tail end of the dot-com boom (the VERY tail end: the first post-layoff all-hands I attended was four months after I was hired out of college; 11 more followed in a crushing depression that made 2008 look like the go-go 80s), I’m not retiring because I struck it rich.  None of the companies I’ve ever worked for has made the kind of splashy exit that has the Facebook kids buying up Caribbean islands on which to store their less-favoured Lotuses.  The first two have vanished into the dustbin of history, the third was sold to its customers for peanuts (then apparently sold again), the fourth was sold by raging assholes to another raging asshole for an insignificant fraction of its worth, and the fifth is about to be taken private for considerably less than the market thought it was worth when I joined for a cup of coffee.  The jury’s still out on Joyent, but anyone who figures to retire on their winnings from the tech industry options lotto may as well just buy tickets in the real one and enjoy the advantage of being fully vested upon leaving the bodega.  That’s never been me.  I’m thankful that as a practitioner of one of the very few remaining trades in the industrialised world to offer a degree of economic independence our grandparents would have called “middle-class”, I had the opportunity to choose this path and make it a reality.  Beyond that, luck played no role.

Why I’m Retiring

It’s important not to understate the positives; I’ve achieved a major goal years in the planning and couldn’t be more excited by what’s next.  At the same time, I don’t think bucolic country living is terribly interesting to most of the admittedly small potential audience at this venue.  So instead I’ll talk about the problems with the industry and trade I’m leaving behind.  Maybe, with some luck, a few of you will drain this festering cesspool and rebuild on it something worth caring about.  Until then, it’s safe to say I’m running away from it as much as toward something else.  Most big moves are like that; it’s never one thing.

At the most basic level, the entire industry (like, I suspect, most others) plays to employees’ desire to keep their jobs.  Whether the existence of those jobs and the manner in which they are presently filled are in the best interests of the stockholders is irrelevant.  A very cozy industry has resulted.  Real change is slow, creative destruction slower still.  Artificial network effects arise, in which employers want people experienced with the currently perceived “winning” technology and job-seekers want to list the currently perceived “winning” technology among their experience.  Whether that technology is actually superior or even appropriate to the task at hand is at most an afterthought.  Vendors line up behind whatever technology or company the hype wheel of fortune has landed on that year.  Products are built with obvious and easily corrected shortcomings that exist to preserve “partnerships” with other vendors whose products exist solely to remedy them.  Backs are scratched all around.

Worse, from the perspective of someone who appreciates downstack problems and the value created by solving them, is that the de facto standard technology stack is ossifying upward.  As one looks at each progressively lower layer, the industry’s collective willingness to contemplate, much less sponsor, work at that layer diminishes.  Interest among potential partners in Dogpatch was nonexistent.  Joyent is choosing not to pursue it in favour of moving upstack.  There is no real interest anywhere in alternatives to the PC, even though the basic architecture was developed for use cases that look nothing like the ones that are most relevant today.  Every server still ships with enormous blobs of firmware so deeply proprietary that even the specifications required to write it come from Intel in binders with red covers reminding you that absolutely no disclosure, implicit or explicit, of the contents is permitted to anyone, ever.  So much for the Open Source revolution.  Every server still comes with a VGA port, and in most cases still has tasks that cannot be performed without using it.  No “Unix server” vendor would even have tried to sell such a machine 20 years ago.  The few timid efforts to improve hardware, such as OpenCompute, do nothing to address any of these problems, declining to address hardware architecture or firmware at all other than to specify that the existing de facto standards be followed.  At most, OpenCompute is a form factor play; it’s about bending sheet metal for slightly greater density, not making a better system from a management, scalability, or software interface perspective.  Meanwhile the most trumpeted “advance” of the last 10 years at the bottom of the stack is UEFI, which replaces parts of the system firmware… with a slightly modernised version of MS-DOS.  
It’s painfully obvious that the sole purpose of UEFI is to enable Microsoft to continue collecting royalties on every computer sold, a brilliant move on their part given the steady decline of Windows, but an abomination for everyone else.  UEFI solves no problems for the operator, customer, or OS vendor.  If anything, it creates more of them.  There’s a better way to do this, but my central observation is that the solutions that would be better for everyone else are not those that would be best for the vendors: AMI, Microsoft, and Intel are quite happy with their cozy little proprietary royalty machine and have no incentive to engineer, or even enable others to engineer, anything better.  The bottom of the stack is designed to serve vendors, not customers.

The state of reality at the OS layer is almost as bad.  World-class jackass Rob Pike insists that OS research is dead.  As an observation, it’s deliberately ignorant; as an assertion, it’s abominable.  Mr. Pike and his ilk seem to believe that the way to improve the operating system is to throw it away and write a new one based on aesthetic appraisal of the previously-completed one.  He spent years dithering around with an academic second system while the rest of the world went out and ran Unix in production and learned the lessons required to build a better system (be it a better Unix or something else entirely).  Unsurprisingly, Plan9 is a colossal failure, and one look at the golang toolchain will satisfy even the casual observer as to why; it was clearly written by people who wasted the last 30 years.  There are plenty of other problems with Mr. Pike and his language runtime, but I’ve covered those elsewhere.  The salient aspect is that the industry, collectively, is excited about his work but not about anything I consider useful, well-done, or worthwhile.  Looking at the development of operating systems that have learned the lessons of life in production, what’s really going on?  More than you think, but not where you’re looking for it.  The overwhelmingly dominant operating systems in use today are GNU/Linux (servers and mobile devices) and Microsoft Windows (all the legacy systems that aren’t mainframes).  Neither has any of the features you need to manage large-scale deployments, and no one is doing anything about that.  Microsoft, to its credit, has found a cash cow and clearly intends to milk it dry.  There’s no shame in that, but it hardly advances the state of the art; their OS is actually well-suited to its niche on corporate desktops and doesn’t need to advance.  GNU/Linux has been written by people with no sense of smell, no concept of architecture, and no ability to advance any large-scale piece of work.  
When they do try something big, it’s never the right thing; instead, it’s systemd, written by another world-class jackass.  And even if systemd were properly scoped and executed well, it would be only a modest improvement on SMF… which has been running in production on illumos for 10 years.  GNU/Linux is definitely not the place to look for exciting OS work.

The need for OS development and innovation is extreme.  A great deal of interesting work is still being done in the illumos community, and in a few other places as well.  But there seems to be little interest in using anything that’s not GNU/Linux.  We come back to the basic principle driving everything: people only care about keeping their jobs.  GNU/Linux is a trash fire, but it’s the de facto standard trash fire, just like Microsoft was in the 90s and IBM in the 70s.  If you choose it and your project fails, it must have been the application developers’ fault; if you choose illumos and your project fails, it must be the OS’s — and thus YOUR — fault.  Never mind that illumos is better.  Never mind that it offers you better ways to debug, observe, and improve your (let’s face it, buggy as sin) application, or that it is designed and built for data centre deployment rather than desktop use.  Your application developers won’t bother learning how to do any of that anyway, and they in turn probably resent the very presence of better tooling.  After all, if they can’t explain why their software doesn’t work, it’s much better for them (they want to keep their jobs too) if they can blame shoddy or missing tooling.  Considering how thin the qualifications to be an application developer are today, I can’t say I blame them.  Most people in that role are hopelessly out of their depth, and better tooling only helps people who understand how computers work to begin with.

The net result of all this is that we have data centres occupying many hectares, filled with computers that are architecturally identical to a Packard Bell 486 desktop running MS-DOS long enough to boot a crippled and amateurish clone of Unix circa 1987, and an endless array of complex, kludged-up hardware and software components intended to hide all of that.  I would ask for a show of hands from those who consider this progress, but too many are happily riding the gravy train this abomination enables.

Where Have All the Systems Companies Gone?

When I joined Joyent nearly 3 years ago, our then-CTO Jason Hoffman insisted that I was joining a systems company.  While he was oft given to aspirational assertions, if nothing else it was a good aspiration to have.  One could readily imagine stamping out turnkey data centre building blocks, racks of carefully selected — or even engineered! — hardware managed cradle-to-grave by SDC and SmartOS, an advanced Unix variant tailored for the data centre.  Part of this vision has been fulfilled; the software stack has been in production on commodity hardware for years.  Realising the rest of this exciting concept that still has no serious competition (probably because only a systems company would have the necessary array of technologies and the vision to combine them this way) requires a great deal of systems work: hardware, firmware, OS, and orchestration.  The pieces all have to fit together; there is no place for impedance mismatches at 10,000-server scale.  Meanwhile, everyone wants to rave about OpenStack, a software particle-board designed by committee, consisting of floor sweepings and a lot of toxic glue.  Not surprisingly, since it’s not a proper system, few OpenStack deployments exceed a dozen nodes.  And in this, the world falls all over itself to “invest” billions of dollars.  Considering what Joyent did with a tiny fraction of that, it’s not hard to imagine the result if a single systems company were to invest even a tenth of it in building a coherent, opinionated, end-to-end system.  That’s the problem I was so excited to tackle, a logical extension of our work at Fishworks to distributed systems.

I’m not retiring because I’m angry with Joyent, and of course I wish my friends and colleagues there well, but it’s fair to say I’m disappointed that we never really went down that road.  I’d like to imagine that someone will, but I’ve seen no evidence for it.  The closest thing that exists in the world is the bespoke (and DEEPLY proprietary) work Google has done for its own systems.  Considering the “quality” of the open work they do, I can’t imagine they’ve solved the problem in a manner I’d appreciate, but the mere fact that there’s no opportunity to share it with the world and build a genuine technology company around it is sufficient to eliminate any thought of going there to work on it.  Perhaps someday there will once again be systems companies solving these problems.

And Thus

It gets difficult to come to work each day filled with bitterness and disappointment.  The industry is broken, or at least it feels broken to me.  If I hadn’t saved up a small nest egg, or if I had expensive tastes, or if there were no other way to occupy my time that seemed any better, I’d probably just do what most people do everywhere: keep showing up and going through the motions.  But that’s not the case.

It’s easy to assert sour grapes here.  I get that.  An alternative explanation is that everything is basically hunky-dory and I’m impossibly embittered by a lifetime of my own failure.  I accept the plausibility of either, and offer no counterargument should you choose the latter.  Fortunately, for me at least, either possibility dictates the same conclusion: it’s time to move on.  The ability to predict the future, even (perhaps especially) one’s own, is a claim rooted in hubris, so I can’t say for certain that I’ll never be employed again, but I can state without reservation that I don’t plan to be.  Maybe the industry will change for the better.  Maybe I will.  Maybe something else exciting will come along of which I want to be a part.  Maybe not.

Thank You

I’m not going to do the thing where I try to name everyone I’ve enjoyed working with, or even everyone who has helped me out over the years.  If we’ve worked together and we’d work together again, you know who you are.  Thanks for everything, and my best to you all.  May you build something great.

golang Is Trash

While getting cgo to work on illumos, I learned that golang is trash.  I should mention that I have no real opinion on the language itself; I’m not a languages person (C will do, thanks), and that’s not the part of the system I have experience with anyway.  Rather, the runtime and toolchain are the parts on which I wish to comment.

Fundamentally, it’s apparent that gccgo was always the right answer.  The amount of duplication of effort in the toolchain is staggering.  Instead of creating (or borrowing from Plan9) an “assembly language” with its own assembler, “C” compiler (but it’s not really C), and an entire “linker” (which is really neither a linker nor a link-editor, but does a bunch of other stuff), it would have been much better to simply reuse what already exists.  While that would have been true anyway, a look at the quality of the code involved makes it even clearer.  For example, the “linker” is extremely crude and is incapable of handling many common link-editing tasks such as mapfile processing, .dynamic manipulation, and even in some cases simply linking archive libraries containing objects with undefined external references.  There’s no great shame in that if it’s 1980 and we don’t already have full-featured, fairly well debugged link-editors, but we do.  Use them.

But I think the bit that really captures the essence of golang, as well as the pseudointellectual arrogance of Rob Pike and everything he stands for, is this little gem:

Instructions, registers, and assembler directives are always in UPPER CASE to remind you that assembly programming is a fraught endeavor.

Wait, what?  Are you being paternalistic or are you just an amateur?  Writing in normal (that is, adult) assembly language is not fraught at all.  While Mr. Pike was busying himself with Plan9, the rest of us were establishing ABIs, writing thorough processor manuals, and creating good tools that make writing and debugging assembly no more difficult (if still somewhat slower) than C.  That said, however, writing in the Fisher-Price “assembly language” that golang uses may very well be a fraught endeavor.  For starters, there’s this little problem:

The most important thing to know about Go’s assembler is that it is not a direct representation of the underlying machine.

Um, ok.  So what you’re telling me is that this is actually not assembly language at all but some intermediate compiler representation you’ve invented.  That’s perfectly acceptable, but there’s a good reason that libgcc isn’t written in RTL.  It gets better, though: if you’re going to have an intermediate representation, you’d think you’d want it to be both convenient for the tools to consume and sufficiently distinct from anything else that no human could possibly confuse it with any other representation, right?  Not if you’re working on Plan9!  Without the benefit of decades of lessons learned running Unix in production (because Unix is terrible and why would anyone want that instead of Plan9?), apparently such obvious thoughts never occurred to them, because the intermediate representation is almost a dead ringer for amd64 assembly!  For example:

TEXT runtime·munmap(SB),NOSPLIT,$0
    MOVQ addr+0(FP), DI // arg 1 addr
    MOVQ n+8(FP), SI // arg 2 len
    MOVL $73, AX
    SYSCALL
    JCC 2(PC)
    MOVL $0xf1, 0xf1 // crash
    RET

This is a classic product of technical hubris: it borrows enough from adult x86 assembly to seem familiar to someone knowledgeable in the field, but has enough pointless differences to be confusing.  Clearly these are well-known instructions, with the familiar addressing modes and general syntax.  But wait: DI is a register?  How is that distinct from the memory location referred to by the symbol DI?  Presumably these registers simply become reserved words and all actual variables must be lower-case.  That would be fine if not for the fact that the author of a module does not own the namespace of symbols in other objects with which he may need to link his.  What then?  Oh, of course; I need to use the fake SB register for that, just like I don’t in any real assembly language.  But it gets worse: what’s FP?  Your natural assumption, knowing the ABI as one should, is that it’s a genericised reference to %rbp, the conventional frame pointer.  WRONG!  The documentation instead refers to it as a “virtual frame pointer”; in fact, %rbp is usually used by the compiler as a general register, just as if you had foolishly built your code with gcc using -fomit-frame-pointer.  Thanks, guys: confusing and undebuggable!  We could go on, detailing the pointless divergence from actual x86 assembly and the failure to genuinely abstract registers and instructions in a way that would allow this “intermediate representation” to be generic across ISAs, but I think by this point it’s plain enough that this entire chunk of the toolchain is simply rubbish.  The real toolchains everyone else uses were invented neither at Lucent nor at Google, so obviously they needed their own, written in seclusion with all the benefits of a 1980 worldview.

The last fun bit I wish to discuss is that funny little character between “runtime” and “munmap” in our previous example.  You see, despite having written their own entire toolchain (including a compiler identifying itself as accepting C that does no such thing), the authors decided that the normal “.” character was simply too special to be repurposed in source code.  Instead, it would retain its existing meaning as the customary dot operator.  But this means some other character will be needed to identify symbols that should have a dot in their names.  So obviously the natural choice here is some high Unicode character.  Obviously.  And equally obviously, when such code is compiled, the character is replaced in symbol names with an ordinary dot.  Of course!

It’s no surprise that the golang people want to replace the “C” parts of the runtime.  I would, too; the “C” language accepted by the Plan9 compiler is not really C.  The compiler has no concept of a function pointer being equivalent to the function itself, or for that matter such similarly obscure aspects of the C standard as the constant identifier NULL (instead of NULL, one must write “nil”, quite possibly the most obnoxiously spurious product of NIH thinking I have ever seen in my life).  But the problems with the golang toolchain and runtime go far beyond an idiosyncratic C dialect; the same thinking behind that oddity permeates the entire work.  Everything about the implementation of the language environment feels amateurish.  The best thing they could do at this point is start working on golang 2.0, with the intent to completely discard the entire toolchain and much of the runtime.  Rewriting more of the runtime in Go is fine, too, but it’s critical that the language and compilers be mature enough to enable bootstrapping in some sensible way (perhaps the way that gcc builds libgcc with the new compiler, not relying in any way on installed tooling to build the finished artifact).  There’s no obvious reason to turf the entire language, but reimplementing it sanely would be a huge benefit to their ecosystem.  Every system already has good tools for assembling and linking code, and those tools support ABIs that enable easy reuse of external software.  The Plan9 crowd needs to spend time appreciating why this is so instead of arrogantly ignoring it.  A sane implementation that leverages those tools would make the Go language far more attractive.
