Wednesday, December 12, 2007

Applying a Proprietary Paradigm to an Open System

I would like to start off by saying I am a big fan of the open development model for almost all software. Over the past several months, however, I have seen a lot of users, many of them long-term FOSS users, try to apply the proprietary paradigm to open projects. This is most obvious with KDE 4.0, but I have seen it in less dramatic ways with other projects. The specific paradigm I mean is the "x.0 means ready for the masses" paradigm. Everything I have seen of KDE 4.0 tells me that it will not be ready for the masses. That is not to say it won't be a good product, just that it won't be ready for the masses. Even people within the KDE development community have been saying on forums and in blog posts that they recommend users wait until 4.1 for a really complete user experience, and that distributions wait until 4.1 to include the code base. That hasn't stopped Kubuntu from including KDE 4.0 code before KDE 4.0 is even released.

At issue is the need for community feedback. I have heard this from several developers: if they don't release something as *.0, they don't get enough users to really find all the bugs and fix them. At a proprietary company there are large-scale structured betas, and people who are paid specifically to try to break the software. In high school I was one of those people. I got a delightful $6.50 an hour back in the '90s to try to break an online curriculum for elementary school kids and file detailed bug reports. The problem we run into with FOSS is that it's made by developers, and while we are starting to get "end users," with all the good and bad connotations that label carries in our community, they are not the ones running betas. Mostly developers and hard-core geeks are running betas, especially of products like KDE, which, if you run it, is going to take over your entire GUI experience. Developers and hard-core geeks are by and large going to be more similar to the people who wrote the software than your average end user is, and the whole point of a beta is to get a very different set of eyes on the software. This is still a problem with proprietary models, but it seems to be less so because so many people want the status of getting that early release of Windows or Office. I have met these people; they make no sense to me at all.

I do not know how to fix this problem. It's tied into a well-established way people think about alphas and betas and RCs, the entire version numbering system, and what certain number combinations mean. These preconceptions were established when proprietary software ruled all, and not all of them apply to FOSS. I do think we need to begin to move away from that numbering system in some way. Before we can get to that, though, is the real crux of what I wanted to talk about (sorry, that first paragraph was setup). People in the community need to understand the dynamics of how FOSS is developed, and that a *.0 for a community FOSS project isn't going to be the same thing as a *.0 proprietary or even corporate FOSS project. I have seen so many people frustrated and disappointed by what they are seeing in KDE 4.0. Many of them have been very "vocal" about this in forums, some of them in constructive ways, some of them in a flaming "KDE SUCKS SEE I TOLD YOU GNOME WAS BETTER" way. I have also seen KDE developers, over and over, talk about how they are accomplishing pretty much everything they want with 4.0. They wanted to do a complete overhaul of dead code and attempt to build a system that will prevent them from becoming dependent on specific technologies. They have established a wonderful working system that can be much more easily built on and developed from this point forward. They have not made a great desktop yet. It's not there, and more importantly, that wasn't the point of 4.0. The point was to make it easier to move toward developing a great desktop. This dynamic is true of many large community open source projects, especially when they do a major overhaul like the one KDE has needed, and it is important for those of us who promote FOSS to keep these dynamics in mind when discussing these projects.

I think this same problem is mirrored in many of the complaints about Ubuntu and its rapid release cycle. That cycle has let them move forward with usability features much faster than any other desktop; they develop at a rate that boggles the mind in many cases. It has also caused an incredible number of bugs. I have an HP dv6000z laptop. I installed Edgy Eft on it. I had an ACPI problem that was easily enough fixed by turning ACPI off at boot (the acpi=off kernel parameter); not a huge issue, but one that needed to be addressed eventually. The headphone jack didn't work. Again, a minor nuisance that needs to be addressed eventually. Finally, it had a Broadcom wireless card. Enough said about the wireless. So I figured, based on experience with previous distros, that within a couple of releases I would be fairly good to go. A couple of releases later I can't get Ubuntu to run at all. Feisty had a sound glitch that locked the whole system up as the first bit of the startup sound repeated over and over and over. I tried to recompile ALSA, I tried various fixes, all to no avail. Feedback about my laptop became harder and harder to find, which told me more and more people were just giving up on it. When the good fight is happening, forum posts happen; if a laptop is trash for Linux, you cease to hear about it. So Gutsy comes along, and the sound repeat is still there, and I can't get rid of it no matter how much I recompile ALSA, but it no longer locks up all of X. Also, my desktop system, where my wireless card worked perfectly with Feisty, had to be moved to a different room so it could be plugged into my router to run Gutsy. I think everyone has heard about Gutsy's wireless woes. Ubuntu is the only system I know of that has these kinds of regressions. These are the sorts of regressions that are supposed to happen as you add new features to developer releases and alphas/betas, not when holding finished products up next to each other. Many non-Ubuntu fanboys scream about these issues and point to them as proof that Ubuntu is filth polluting the purity of Linux. Here's the thing: these are not LTS editions. If you look at how long Apple and Microsoft take to release an OS, it is not every six months. Apple is closer than Microsoft, but they still don't release at that harried rate. It all comes back to getting eyes on the software, because FOSS projects don't have the money to put into testing that Apple or Microsoft do. So they make releases. If Ubuntu takes the time to make sure its LTS products have the kind of quality a commercial release gets, and moves forward with the bleeding edge in its short-term products, then I have no complaints.

At issue is a matter of mind share and marketing. These are the largest issues for FOSS. No one is spending the money on marketing that they should be. We see the occasional Linux ad from IBM or Novell, but when is the last time you saw a really good Red Hat ad? I never have. I've never even seen a really bad Red Hat ad. I've never seen an Ubuntu ad, or a Mandrake ad, or a Linspire ad. It's time to start thinking about how people view our software, and to send messages that clearly communicate what we intend each release to be. Microsoft and Apple spend billions of dollars on this process. Unfortunately a lot of it is spin, smoke, and mirrors. I would hope that FOSS organizations would take a more honest approach, but even something in the mold of the same old same old would be better than everyone just making assumptions based on ingrained paradigms that don't apply.

As a final disclaimer to the people who will inevitably flame me about Ubuntu, good or bad: I don't know that Ubuntu is doing what I suggested. Their upcoming LTS release might be as flaky and slow and unfortunate as their in-between releases; I just want to give them the benefit of the doubt. For the fanboys: even with the flakiness, I still use Ubuntu because I love the interface and features.

11 comments:

Evgeny "nekr0z" Kuznetsov said...

You certainly have a valuable point there, Victor. But I can't agree completely, and I'll try to state my reasons, if you don't mind.

As for the billions of dollars that could be spent on marketing to the benefit of FOSS products: even if companies like Canonical or Red Hat do have such huge sums to spend, which I doubt, they would, in my personal opinion, do much more for us users if they spent them on improving the products themselves instead. It looks like they do, too.

And as for the issues with *.0 releases... Well, I share your feelings about foul issues with hardware completely, but one thing comes to my mind on this matter: Vista. I never used it, nor am I looking forward to doing so, but from what I have heard and read there were a lot of severe hardware and even software compatibility problems when it first arrived. Those problems are being solved relatively fast and completely, but (and here is the important point) not by Microsoft itself. It was the hardware vendors who were supposed to solve all the driver issues, and so they did! They were forced to. By users. Some Joe User buys a printer, tries it with Vista, finds that it doesn't work, and whom does he blame? The printer manufacturer, for not providing the drivers! And the manufacturer works really hard to eventually provide those drivers, because otherwise they end up losing market share to those who do solve compatibility issues fast.

Now our Joe User tries the same trick with Ubuntu; say his brand new Canon scanner doesn't work with it (my brother's doesn't, so the story is not entirely made up). Whom does Joe blame for that? Canon? Hell no! He blames Canonical, the Ubuntu community, the XSane team; he blames all the Linux gods and devils, Torvalds and Stallman included, but he doesn't blame Canon. And his idea is simple: the scanner is in order, it works in Windows, so if it doesn't work in, say, Debian, the problem is entirely Debian's, since the thing itself is working and it's Debian that isn't. And this idea, though universally shared by users, is completely and utterly wrong, isn't it?

That's the current state of hardware-related things in the FOSS world, and the result is FOSS companies spending huge effort doing the hardware vendors' job, effort that could and should be spent on improving the software itself!

Those are the arguments I wanted to point out. And I'm deliberately speaking about hardware issues only, because on the software side of the *.0 thing I agree with you utterly. Though I still think that if the Community manages to free up some currently hardware-bound resources, those will be quite enough to solve a whole lot of the software-related issues you talk about.

Victor J Kinzer said...

Thank you for your well-expressed response. I am all about good debate, and I really want people to disagree with me the way you have (and the way the one response on linuxtoday.com has). What I dislike is flames with no thought behind them.

I totally agree with you on the hardware scenario you describe. Here are the "hardware support" issues I have problems with.

I have a Linksys Wireless-G USB device that worked beautifully in Feisty. And as I said, my sound card worked beautifully in Edgy. The move to Feisty introducing that hideous tear-down-the-whole-system flaw is kind of inexcusable. Not that it showed up, but that after it showed up and they got bug reports about it (and they did, I have checked), they didn't provide so much as a way to roll back to the older code from Edgy. Nothing: not a workaround, no addressing it at all.

The wireless issue is a bigger one. Out of a desire to support the Broadcom chipset, which requires a proprietary firmware blob or ndiswrapper, they have broken one of the only cards they supported out of the box. The Linksys Wireless-G is detected and can see networks, but because of their insistence on supporting the GNOME Network Manager, which is nowhere near stable and nowhere near 1.0, the old Atheros cards are a nightmare now. They didn't even bother to "officially not support them" in Gutsy. They included the driver and the detection even though they knew, from a ton of complaints and bug reports, that once installed the cards just wouldn't work properly. They also include an IPv6 implementation for networking without any software to detect refusal from a router. If they had taken the time to write a program that checks whether the router accepts their IPv6 traffic and turns it on or off accordingly, I would applaud them for including it and for moving forward on new technology. And this isn't hardware support; this is protocol implementation.
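
Something along these lines would have been enough. This is just a minimal sketch in Python, purely illustrative; the test host name and the timeout are assumptions of mine, not anything Ubuntu actually ships:

```python
import socket

# Hypothetical dual-stack test host; any server known to answer over
# IPv6 would do. This name is an illustration, not a real endpoint.
TEST_HOST = "ipv6.example.net"

def ipv6_usable(host=TEST_HOST, port=80, timeout=3.0):
    """Return True only if an IPv6 TCP connection actually completes,
    i.e. the router/ISP really passes IPv6 traffic, rather than the
    machine merely having an IPv6 address configured."""
    if not socket.has_ipv6:
        return False
    try:
        candidates = socket.getaddrinfo(host, port, socket.AF_INET6,
                                        socket.SOCK_STREAM)
    except socket.gaierror:
        return False  # no reachable AAAA record; treat IPv6 as unusable
    for family, socktype, proto, _canonname, sockaddr in candidates:
        s = socket.socket(family, socktype, proto)
        s.settimeout(timeout)
        try:
            s.connect(sockaddr)
            return True   # the connection completed over IPv6
        except (socket.timeout, socket.error):
            continue      # try the next address, if any
        finally:
            s.close()
    return False

if __name__ == "__main__":
    if ipv6_usable():
        print("IPv6 is getting through; leave it enabled")
    else:
        print("IPv6 is not getting through; fall back to IPv4 only")
```

Run something like that when the network comes up and flip IPv6 on or off based on the answer. That's the whole idea.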

The thing is, they need to take the time to get people to look at these things and identify them, and that takes releases. I just think they should differentiate between their business/enterprise releases (LTS) and their community releases with short-term support. Fedora and Novell do this, but with a stricter differentiation than Shuttleworth wants, for legitimate reasons. Still, he needs to find a way to institute, in specific releases, the quality controls that will get OEM and enterprise support, while still getting the other stages out into users' hands, and hopefully find a way to communicate to users what each release really is, so their expectations are realistic. That is what is not happening.

Evgeny "nekr0z" Kuznetsov said...

I do agree about this release unreadiness, as well as about regression issues that definitely should not hit a release anyway. I even have an example myself: the famous bug that prevented users with Japanese, Finnish, Russian, and some other locales from running the restricted drivers manager in Gutsy. The workaround is really simple, and the bug is already fixed in the updates, but the broken packages nevertheless stay on the official 7.10 CD/DVD, thus denying inexperienced users one of Ubuntu's significant features (the restricted drivers manager is hardly needed more than once, and that is during initial setup, before the updates are actually applied).

This example, along with a lot of less superficial (and indeed more complicated and less addressed) issues, certainly shows us that "something is rotten in the state of Denmark".

And as a matter of fact, I do encourage the idea of having a separate set of releases that concentrate on the flawless working of the features they provide, rather than on the number of those features. And I dearly hope that Ubuntu's LTS releases come to serve this purpose eventually.

The flaw behind the LTS idea is that the Ubuntu LTS releases are not developed by Canonical any differently, at least no more so than the other releases are. Both LTS and "ordinary" Ubuntu releases are developed in the same way, and that is what makes the great difference between Ubuntu's approach and, say, Red Hat's. And while there are advantages as well as drawbacks to both ways, I believe Canonical has not yet found the proper LTS strategy. It's hard to judge for now, though, since we have only seen one LTS release so far (6.06 Dapper Drake), with the second one to come next spring.

And they are struggling to find that proper strategy, too. I have read some of Mark Shuttleworth's statements recently (though I can't provide a link right now; I have somehow lost track of it), and what he suggests sounds like a very sensible idea to me. Starting with the upcoming 8.04 release, the milestone release itself, according to Mark, will not be considered LTS right away. Instead, some work on fixing bugs and various issues will be done right after the release, resulting in an 8.04.1 update release which, eventually, will be announced as the LTS. My opinion is that this strategy, if performed properly, will result in the sound and stable LTS release I am very much looking forward to. And doesn't this strategy match the point you have so reasonably stated in your post, too?

And if it's done, and if that new LTS release receives sufficient backporting activity (which 6.06 LTS indeed doesn't, and that looks to me like the reason a lot of users have upgraded to 6.10 and onward, myself included), it may well be the very release that persuades hardware vendors to offer more support, providing them with a solid and stable long-term base for their driver engineering. Putting myself in their shoes, I certainly understand the difficulty of keeping up with the rapid changes and ever-appearing issues that accompany the current swift flow of releases. Thus I place huge hopes on the 8.04 (or rather 8.04.1 LTS) release, and we are all going to see if it proves to be what I hope.

Speaking of Network Manager in particular, I'm having no issues with it detecting the lack of IPv6 support in the networks I connect to, nor have I ever suffered any great problems with Network Manager itself. I must confess, though, that my system has an Intel wireless chipset splendidly supported by Linux, and that may be contributing to my flawless wireless experience. That said, I am also "enjoying" awful power-management problems on my system, so I am not totally left out of the hardware problems, which were, I should admit, sometimes made even worse when 7.10 was released.

The last thing I should mention is that I can see good progress anyway. Having bought a brand new laptop last July and faced a lot of hardware-related issues, I've found quite a lot of user activity on Launchpad around many of those issues. The fact that none of them were addressed in the 7.10 release made me quite upset, but right now, after the first alpha of the upcoming release hit the mirrors, developers are actively participating in the bug discussions I'm subscribed to, eager to know whether the issues were resolved in the alpha and, if they weren't, what can be done to have them solved in the upcoming second alpha release. The users (myself included) are actively testing and providing debug information, so I think the Community will sort things out at last.

Anonymous said...

I certainly can't help but think the problem isn't inherent to versioning schemes but lies with the users themselves.

I see a TON of valid points in your argument, and I actually agree with most of them.

I disagree, however, that we need a new versioning scheme. Users, from the Free Software and proprietary software worlds alike, need to get off the "bigger is better" wagon.

There's a view that "the next one is better" - Microsoft pushes that "Vista is better than XP" and Apple's pushing "Leopard over Panther" or whatever (I lost track of their cats.)

That's simply not true - one size does not fit all. Users with old systems obviously don't think Vista is better - it won't run on the hardware they already own!

Free Software has the opportunity to overcome this mentality. Everyone knows the real reason Vista is "better" is that software will still be released for it in two years; that can't be said about XP. Free Software doesn't have this limitation - users (and indeed, distros) are able to freeze software at any given point in time. There's no reason why code written ten years ago couldn't still be getting security updates, if someone decided keeping that code around was beneficial.

Developers and "hardcore" hackers know that they can download the source code to Compiz on any day of the week and build "the latest and greatest". Luckily, doing that requires the understanding that you get not only the latest features but the newest bugs as well. Until users grasp this concept there will be issues.

And as someone who advocates Free Software, I know people take a long time to learn...

Evgeny "nekr0z" Kuznetsov said...

And I should say Kevin has an important and true point there. But if we take this point ad absurdum, versioning is no longer needed at all. One could easily abandon the x.yy.zzz system in favour of YYYY-MM-DD type versioning, as one effectively does with CVS snapshots...

But the good old x.yy.zzz system is still in heavy use, and the reason is that we need some milestones, some releases, that are at least believed to be free of foul bugs and severe issues. Users expect x.0 releases to be production-ready, no matter how new they are. The software development process, on the other hand, is endless and ever-ongoing, which means bugs are fixed every day, but other bugs appear every day as well.

And this is actually the purpose of preparing a release: picking some period during which no new features are added and all effort is dedicated to fixing bugs and solving issues instead. And then, at some stage of this process when not too many bugs are left (eliminating them all would take all the time from this Saturday till the Apocalypse, and most of the Last Judgement time as well), the Release happens, and this Release should contain fewer bugs than the average randomly picked CVS snapshot, or at least it is supposed to. And in my personal (quite irrational, maybe) belief, no Release should ever contain regressions.

Judging by those standards of release quality, the recent Ubuntu releases are not too good (they could be worse, of course). The question actually is whether we (I mean the Community by "we") or Canonical should present those releases (7.10, to take an example) as Released And Ready For Use versions, which they hardly really are. Or perhaps we should abandon those pretences and reserve that claim for LTS releases instead? Or maybe there is some other way to close this gap between what our releases are and users' expectations of what a release should be? Should we educate users not to expect that much stability from a release at all? Or should we make releases stable instead?

Anonymous said...

@evgeny

I certainly understand that point - to say users don't expect production-ready releases is absurd. :) However, there's an oft-repeated, seldom understood difference between the proprietary world and the Free Software world.

Free software is, more often than not, written to "scratch the author's itch", so to speak. For a free software developer, software is "production ready" when it meets the need that brought on the coding of it.

For the proprietary world, that need is the customer's. "Average users" see themselves as consumers of software, but that viewpoint is useless for many types of Free Software. That is not to say that developers SHOULDN'T develop with other users in mind - but it is to say that it is unfair to assume they must: not all software is written with the intention of being "production ready".

"And this is actually the purpose of preparing the release: picking some time during which no new features are being added, all efforts dedicated to fix bugs and solve issues instead."

This is exactly the problem, I think - this statement takes a free software application and looks at it as if it were a product on the store shelves. Products need to be marketed; they need to be promoted, refined, polished, safe, and ready to do "Thing X", with the understanding that they will NOT do "Thing Z".

Free Software, on the other hand, is often "living" and in a state of constant betterment. The idea with THIS kind of software is that when it's "perfect", it's 1.0. Until then, it's not done, so don't expect it to be. And there will be no 2.0, since it's not perfect until it can't be improved anymore. Failure to grasp that not all software is a product is a user issue, in my opinion, and not a versioning one.

Evgeny "nekr0z" Kuznetsov said...

You are absolutely right, Kevin, speaking about the software products that are... well, it's even unfair to call them "products", isn't it, since they are not meant for end-users (though if an end-user can make use of such a piece of software, he is absolutely welcome to, and that is the very paradigm that has put FOSS where it is now). But what provoked the post under discussion were KDE and Ubuntu, and both of these projects are hardly subject to that point of view, though that surely can be debated. And those are the projects I had in mind when speaking of all those Release matters.

Personally, I think a piece of software becomes a product the very moment some company starts intending to make a profit from it. By the way, do you remember Bug #1 on Launchpad? The matter of this "bug" is Microsoft Windows having a bigger market share than Ubuntu, and keeping that bug open definitely suggests treating Ubuntu as a product, with all the consequences that entails, at least on Canonical's and the end-user's side.

The question is how the Community (taking the less broad meaning of the word, not including the honourable Joe Enduser in the Community) is to treat Ubuntu: should we care about Joe Enduser more? The Ubuntu Code of Conduct suggests we should...

That said, I'm really glad you've stressed that crucial difference. Keeping in mind that not all FOSS projects are indeed products (or should I say very few of them actually are) is absolutely necessary!

Anonymous said...

Absolutely. :)

Personally, I'm a member of the "screw adoption, let's make it right" crowd. It's an opinion I've taken knocks over, because I believe popularity doesn't really help (the number of unclosed bugs that move from one Ubuntu release to the next increases with its userbase).

Ubuntu's goal of dethroning Microsoft is irrelevant to me, but you bring up some good points, and it IS relevant to this discussion.

However, Ubuntu isn't a single thing. Ubuntu is a mesh of many applications, some of which are true Free Software products and some of which are cranked out by a high schooler trying to improve his coding skills. Canonical's inclusion of software that is not production-ready (in terms of features) stands contrary to the logic thrown out here regarding versioning.

Ubuntu would certainly become a more stable distro if they decided to take time (say, two years) to squash bugs (like Debian stable releases, for that matter), but at the same time they'd be plagued for two years with a lack of "cutting edge" features - Compiz Fusion is sporting a whopping 0.6 version and it was, for a YEAR, a prime goal of Mark Shuttleworth's for inclusion. Bug #1 seems to support, in my opinion, my view that version numbers are insignificant. Cramming in the features that will attract the highest number of users is the goal of Ubuntu - popularity, not quality, will close that bug. If the existence of Ubuntu itself didn't indicate it: buggy new features attract more people than stable old ones. Else we'd all be running Debian Etch or gNewSense or Ubuntu Dapper.

You threw out an example before of using CVS/SVN/git-like versioning - (kdenetwork-2007-12-14-14-45-33) - and, thinking upon it now, it appears that might actually be a GREAT way to do it - after all, if Ubuntu feels that Compiz BY DEFAULT is meeting their needs best, why couldn't they deem yesterday's CVS checkout of it "production ready"? Debian, on the other hand, could choose to "freeze" on the 0.6.2 release because their definition of "production ready" doesn't include it.
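
As a side note, date stamps even have one practical advantage: they sort correctly as plain strings, which dotted version numbers famously don't. A toy illustration in Python (just a sketch of the sorting behaviour, nothing to do with any real packaging tool):

```python
# Dotted version strings don't sort correctly as plain text:
dotted = ["0.6.2", "0.10.0", "0.9.1"]
print(sorted(dotted))
# ['0.10.0', '0.6.2', '0.9.1'] -- 0.10.0 lands first, which is wrong

# You have to compare them as tuples of integers to get the real order:
print(sorted(dotted, key=lambda v: tuple(int(p) for p in v.split("."))))
# ['0.6.2', '0.9.1', '0.10.0'] -- correct

# ISO-style date stamps, on the other hand, sort chronologically as-is:
dated = ["2007-12-14-14-45-33", "2007-02-01-09-00-00", "2007-11-30-23-59-59"]
print(sorted(dated))
# already in chronological order, no parsing needed
```

Nothing profound, but it shows a date stamp can serve as a version number with zero extra machinery.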

I suppose, finally, that this entire discussion is pretty much just banter (it's work-safe entertainment, so I'm not complaining), considering that there's no true definition of "production ready", even in the proprietary world.

Photoshop CS3 still can't (five months after my company shelled out the $$$ to give it to the designers) scan an image from our scanner without seg faulting (is that the term for Windows crashes?). We've reported it to Adobe several times, and their official workaround is to "use CS2".

At least you can still DOWNLOAD older versions of Ubuntu. :)

Evgeny "nekr0z" Kuznetsov said...

Well, you are totally right about Ubuntu. I highly praise the huge and splendid work they have done in providing a convenient way of setting up proprietary drivers (though obviously not enough for every single user to be satisfied), but some other things about that distribution seem to lack sense to me. Those things can, I understand, look like benefits to other people (this point is splendidly accented in Victor's previous post).

Concerning our discussion, it doesn't seem completely useless to me, since it made at least three men think, which is a rare achievement these days, though otherwise it certainly is not of much use. Well, let us treat it as Maxwell treated theoretical physics, then: he is quoted as saying that theoretical physics is like sex, since it too may eventually have a material outcome, but that's not why people engage in it.

And your note about Photoshop reminds me of the original reason I switched from Windows to Linux: in despair over a couple of issues that really made me mad, I received nearly the same technical support as you did, and finally had the idea that I could keep solving problems on my own without having to pay for support that is of no use anyway. A week after my transition was done, I realized that this reason was not the important one for making the switch, but Linux had already won me over by then...

suhail said...

I think the most important problem the Linux community must solve is the lack of standardization in Linux. It is very hard to support a platform that is not standardized. For example, look at the Firefox web browser. It is one of the most successful open-source projects. It is well supported by almost all major websites because it is standardized: if you use Firefox under Windows, you get the same experience as under Linux or BSD or Solaris. I hope the Linux community and vendors will address this issue.

Anonymous said...

Great work.