First public APE release
I originally wrote APE at Philips, to test the web app SoftFab. It was sparked by a rather embarrassing bug that a user found shortly after we released a new version: a recent change had broken the presentation code of a particular page, so the user saw an error page instead.
Because the bug was in presentation code, the unit tests didn't catch it. PyLint didn't catch the bug either, but given how hard it is to do static analysis on a very dynamic language like Python, I don't blame it. We used development snapshots of SoftFab ourselves, but apparently we didn't use this particular page very often.
We clearly had a gap in our test strategy. A web search for testing at the presentation layer turned up frameworks like Selenium. But those would require writing test scripts for every page, which would be a lot of work. And those scripts would be relatively brittle, since the user interface frequently changed, so keeping the scripts updated would also take considerable effort.
The reason I described the bug as "embarrassing" earlier is how easy it was to trigger. Just clicking random links like a monkey would have uncovered it. But SoftFab had so many pages that doing this by hand before every release would be both time-consuming and mind-numbing. We needed an automated monkey.
What the monkey should do is request all pages and check whether the server's response looks good. However, since SoftFab is a very dynamic site, the contents of some pages change a lot depending on the request query. To get decent test coverage, we would not only have to request each page, but try several different queries for each page. I wrote a web crawler that, given a starting URL, extracts further requests to try from the links and forms in the returned HTML.
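The crawling idea can be sketched with Python's standard library: parse the returned HTML and collect the targets of links and forms as new requests to try. This is only an illustration of the approach, not APE's actual code, and the example URLs are hypothetical.

```python
# Sketch: collect follow-up requests from the links and forms in a page.
# Not APE's real implementation; example URLs are made up.
from html.parser import HTMLParser
from urllib.parse import urljoin


class LinkExtractor(HTMLParser):
    """Collects the targets of <a href> and <form action> attributes."""

    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.requests = set()

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == 'a' and 'href' in attrs:
            self.requests.add(urljoin(self.base_url, attrs['href']))
        elif tag == 'form' and 'action' in attrs:
            self.requests.add(urljoin(self.base_url, attrs['action']))


extractor = LinkExtractor('http://example.com/jobs')
extractor.feed('<a href="/queue">Queue</a>'
               '<form action="deljob"><input type="submit"/></form>')
print(sorted(extractor.requests))
# → ['http://example.com/deljob', 'http://example.com/queue']
```

A real crawler would then request each not-yet-visited URL in a loop, feeding every response back into the extractor; for forms it would also generate several different query values to improve coverage.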
To determine whether the response looks good, the first thing we look at is the HTTP status code. If it's 200 (OK), that's a good sign. If it's 500 (Internal Server Error), we triggered a bug. If it's 404 (Not Found), we discovered a broken link. So even before interpreting the page contents, we can already detect bugs.
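The status-code triage above is simple enough to state as a tiny function. This is my own sketch of the rule; the verdict strings are invented for illustration, not APE's actual reporting.

```python
# Sketch of triaging a response by HTTP status code alone,
# before looking at the page contents. Verdict strings are illustrative.
def classify_status(status):
    """Map an HTTP status code to a test verdict."""
    if status == 200:
        return 'ok'
    if status == 500:
        return 'bug: internal server error'
    if status == 404:
        return 'broken link'
    return f'suspicious status: {status}'


print(classify_status(200))  # → ok
print(classify_status(500))  # → bug: internal server error
print(classify_status(404))  # → broken link
```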
What else can we do to check a page that we have no specific knowledge of? We know it's HTML, so we can check whether it's valid HTML. If it's not, then that could point to a bug. In any case it's better to output only valid HTML: while web browsers are very robust against invalid HTML, what they correct it to may not be what you intended.
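One cheap generic check along these lines, assuming the pages are XHTML, is to run each response through an XML parser: a page that is not even well-formed certainly is not valid. (Full HTML validation needs a proper validator; this is just a sketch of the idea.)

```python
# Cheap well-formedness check, assuming XHTML output.
# This catches unclosed tags but is not a full HTML validator.
from xml.etree import ElementTree


def is_well_formed(document):
    """Return True if the document parses as XML, False otherwise."""
    try:
        ElementTree.fromstring(document)
    except ElementTree.ParseError:
        return False
    return True


print(is_well_formed('<html><body><p>fine</p></body></html>'))  # → True
print(is_well_formed('<html><body><p>oops</body></html>'))      # → False (unclosed <p>)
```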
At this point the test tool had become more than just a monkey clicking random links, so I decided to call it APE, inspired by the Librarian from the Discworld novels.
You can read more about APE on its home page.
An open source release of SoftFab is also in the works, but we need a bit more time to finish cleaning up the code and documentation.