Follow-up: How PHP is Broken and How It Can Be Fixed

Note: This article was originally published at Planet PHP on 15 November 2011.

A few weeks ago I wrote an article in which I complained about a few aspects of the PHP development process that I thought were inexcusable and harmful. The article was surprisingly well received, with only a handful of people responding by demeaning me; given the nature of the article, I'd say that is a big win! The generally positive response does not make up for the flaws in the article itself, however. You see, I started out with the serious intention of not only making an honest critique but also providing some practical feedback on how the process could be improved. I feel my critique was completely honest, but I failed spectacularly at my second goal, and any merit the article had was overshadowed by its negative tone.

It is no secret that the PHP development process has never been a shining example of project organization or quality assurance. Until recently, some of the most important aspects of any project's development cycle were either entirely lacking or ill-defined. Worse, there was little in the way of systematic quality assurance. Fortunately, the core devs did not ignore these issues, and they've been pushing hard to improve these areas over the past few years: they've created a new release process, a new voting procedure, and have been building up the automated test coverage. They're shaping development into a more open, clear, and modern process that the whole community can be proud of.

This type of transformation takes time, however, and they are not at the finish line yet. Specific characteristics of the development team and process remain serious problems today. An acceptance of failing unit tests, combined with the lack of an automated release process that enforces testing requirements, resulted in the recent disaster that was the PHP 5.3.7 release. A lack of clearly defined responsibilities continues to erode developer trust in the PHP project. Insufficient means of communication, and insufficient use of the means that do exist, frequently keep the large number of non-contributing PHP developers in the dark. I'll address each of these individually for clarity's sake:

Automated tests are ineffective if failures are acceptable

Much progress has been made in recent years with the PHP test suite - coverage for PHP 5.3 is at 70%. Obviously this isn't ideal, but PHP is a massive project, so it is understandable that it takes serious time to build the test suite toward 100% coverage. The big problem with the automated PHP tests is not that there are too few of them, but rather that passing them is not a requirement. To make matters worse, it does not appear that new code is required to have significant test coverage either. At the time of this writing, there are 128 failing unit tests in PHP 5.3 (the current stable branch) and 129 failing unit tests in PHP 5.4. There is also a small decrease in test coverage between the two versions (70.2% in 5.3 vs. 70.0% in 5.4); I'm not sure whether this is the result of existing tests being removed or new code being committed without corresponding tests, but it is definitely not a trend that should continue.
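
For readers who haven't looked at PHP's internals: the core test suite consists of .phpt files executed by the run-tests.php harness, where each file bundles a description, the code to run, and the expected output, and a test "fails" simply when the actual output differs from the expected block. A minimal sketch of the format, using an invented test of my own rather than one from the actual suite:

--TEST--
substr() returns the requested slice of a string
--FILE--
<?php
// The harness runs this code and compares its output to --EXPECT--.
var_dump(substr("development", 0, 7));
?>
--EXPECT--
string(7) "develop"

Nothing in the harness itself blocks a release when such a comparison fails; it only reports the mismatch, which is exactly the gap described here.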

The consequences of accepting failing tests were never more apparent than with the stable release of PHP 5.3.7. The details have been outlined numerous times elsewhere (including in my last post), but the gist is that a serious bug breaking a common security component was introduced into the PHP repository, the automated tests actually identified the bug, and yet the release happened anyway. There has been no official explanation for how this happened, but it appears to be the result of an unfortunate human error. At the time, close to 200 tests were failing, so it seems reasonable to assume that a human may have missed that one of those failures was actually due to a severe issue.
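
For context, the security component in question was the crypt() function: in 5.3.7, crypt() with an MD5 salt returned only the salt instead of the full hash, silently breaking password verification. Below is my own sketch of the kind of round-trip sanity check that exposes such a regression (it is not the actual test from the suite):

<?php
// Hash a password with an MD5-style salt, then verify the result.
$salt = '$1$abcdefgh$';
$hash = crypt("secret", $salt);

// On a broken build, $hash is just the salt, so it is no longer than the
// salt itself and cannot validate the original password.
if (strlen($hash) > strlen($salt) && crypt("secret", $hash) === $hash) {
    echo "crypt() OK\n";
} else {
    echo "crypt() regression detected - do not release!\n";
}

The real test caught the bug in a similar way, by checking crypt()'s output; the failure was reported but lost among the roughly 200 other failures.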

There are a couple of things that could be done to solve this problem. Defining a coverage goal would be an excellent starting point. Ideally this goal would be 100%, but any goal above 80% would be a huge step in the right direction. Once a sufficient coverage goal has been defined, all new features committed to HEAD should be required to have corresponding tests that match or exceed the coverage goal. Unless existing tests are removed, this ensures that test coverage can only increase over time. In addition, all current automated tests should run successfully. There appears to be an effort to identify which tests should be expected to fail, but that only partly addresses the problem.
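
To make "all tests must pass" more than a policy statement, the release procedure itself has to refuse to proceed on failures. Here is a hypothetical gate script of my own (not anything the PHP project actually uses), assuming the run-tests.php output has been saved to a file named test-results.txt and that each failing test appears on a line beginning with FAIL, as the harness prints it:

<?php
// release_gate.php - refuse to proceed with a release if the saved test
// harness output contains any failures.
$lines = file('test-results.txt', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);

// Collect lines that start with "FAIL". Lines starting with "XFAIL"
// (expected failures) begin with "X", so they are deliberately excluded.
$failures = array_filter($lines, function ($line) {
    return strpos($line, 'FAIL') === 0;
});

if (count($failures) > 0) {
    fwrite(STDERR, count($failures) . " failing tests - aborting release:\n");
    fwrite(STDERR, implode("\n", $failures) . "\n");
    exit(1); // a non-zero exit stops whatever release script invoked us
}

echo "All tests passed - safe to tag the release.\n";

With a check like this wired into the release process, a repeat of 5.3.7 becomes mechanically impossible rather than dependent on a human scanning hundreds of results.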

Truncated by Planet PHP, read more at the original (another 6503 bytes)