Here is a software quality scenario now playing out in Melbourne, as explained by Denali Lumma, Senior Engineering Manager at Uber, who spoke at a packed San Francisco Selenium Meetup in August. As a member of the planning committee for the Selenium Conference, Lumma shared insights into tools that will change the nature of test automation and continuous delivery over the next few years, and talked about some of the infrastructure and software that powers Selenium.
Lumma introduced Applitools, which automates testing of the visual aspects of an application that Selenium cannot, and showed a quick demo of an application under test: the engine stitches a base image together into a bitmap, then runs image comparisons to make sure pages are rendered as they should be. Applitools has an extended range of filters and features that work around visual-testing problems such as changing adverts.
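The core idea behind that demo, comparing a candidate screenshot against a stored baseline while masking out volatile regions like adverts, can be sketched in a few lines. This is an illustrative sketch only, not the Applitools SDK: the function name, the `tolerance` parameter and the `ignore_regions` boxes are invented for this example, and images are modeled as 2D grids of RGB tuples rather than real stitched bitmaps.

```python
# Sketch of baseline ("golden image") comparison, the idea behind visual
# testing engines. All names here are hypothetical, not a real SDK's API.

def diff_images(baseline, candidate, tolerance=10, ignore_regions=()):
    """Return pixel coordinates where candidate deviates from baseline.

    ignore_regions: iterable of (top, left, bottom, right) boxes to skip,
    e.g. the area where a rotating advert is rendered.
    """
    mismatches = []
    for y, (base_row, cand_row) in enumerate(zip(baseline, candidate)):
        for x, (b, c) in enumerate(zip(base_row, cand_row)):
            if any(t <= y < btm and l <= x < r
                   for (t, l, btm, r) in ignore_regions):
                continue  # masked out, e.g. a changing advert
            # Any per-channel difference beyond the tolerance is a mismatch.
            if any(abs(bc - cc) > tolerance for bc, cc in zip(b, c)):
                mismatches.append((x, y))
    return mismatches

# Two 2x2 "pages": identical except the top-right pixel.
white, black = (255, 255, 255), (0, 0, 0)
baseline = [[white, white], [white, white]]
candidate = [[white, black], [white, white]]

print(diff_images(baseline, candidate))                # [(1, 0)]
# Masking the changed region (an "advert") makes the pages match.
print(diff_images(baseline, candidate,
                  ignore_regions=[(0, 1, 1, 2)]))      # []
```

A real engine adds layout-aware comparison modes on top of raw pixel diffs, which is what the filters mentioned above provide.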
Other tools Lumma mentioned include Semmle, a static analysis tool that gives a holistic view of data, including tests, and presents it to technical and non-technical staff to identify areas for improvement. HashiCorp, which develops a range of tools for infrastructure and environments, is also relevant for the monitoring capabilities of products like Atlas.
For security testing, Burp is a suite of products that is starting to find its way into Uber as an end-to-end test solution, from traffic inspection to automated detection of vulnerabilities.
The main focus of the talk is something many test specialists in Melbourne should relate to right now: the ideal picture of a functional CI platform that can power successful Selenium deployments in the future. Framing test automation as continuous integration and deployment, Lumma examines why the majority of companies fall short of a truly effective process.
It’s not related to Selenium; it’s related to the continuous integration platforms that run Selenium. Most companies view infrastructure as something to be done last because they see it as trivial and low priority: easy to maintain, not integral to the business, with no impact on the bottom line, and over time merely equally important rather than increasingly important.
The minority of companies, the elite, see infrastructure as something to be done first: it’s the highest priority. It’s extremely challenging; at a minimum it’s a distributed system, and its best implementation is a distributed information retrieval system. It has an immediate and direct impact on the business and the bottom line, and its impact grows exponentially over time at your company.
What matters is the attitude of companies. Lumma says the elite organizations see an efficient continuous integration pipeline as a deep problem domain and a serious computer science problem, given high priority and worked on by the most senior engineers and architects, while the majority of companies see it as shallow, trivial and more of a workflow problem. This limited capacity for CI prevents the majority of companies from scaling, while the opposite is true for the minority. At the same time, only in the minority of companies are defects found before they are committed to master, which is always green.
Different features of quality culture are baked in for the majority and the minority: the majority have a random process, while the elite have a consistent process; the majority think *shitty* tests with some failures are OK, while the elite do not, testing under poor network conditions with a 100% pass requirement; the majority ship code with reckless velocity, early, bundled, or long after it is ready, while the elite treat velocity responsibly.
Lumma says that although many companies use Selenium for test automation, almost none are successfully executing real continuous integration, because the platform and tooling that power it are broken.
It turns out that everyone we surveyed is using Jenkins for their CI system, which is insane because Jenkins dates back to the mid-2000s (as Hudson) and hasn’t changed much since. What is going on here? Why can’t we get our CI situation cleared up? It’s like the cobbler’s kids having no shoes: it’s not that hard, yet no one is investing in fixing the problem, and we’re just walking around in shoes with holes in them.
When asked whether anyone was actually happy with Jenkins on AWS or elsewhere, no one was happy at all; the overwhelming majority were unhappy.
Why does Jenkins suck so badly, and why are there no real contenders that can replace it yet? It’s slow, it’s unreliable, and changes are batched together; all of these things are related, by the way. There are a bunch of different CI companies out there, but none of the contenders that have come up since is actually better than Jenkins by any significant margin, so there is not enough reason to move to a new system: Jenkins is known, and people know how to use it. You have to be ten times better to get people to move.
Taking these points into consideration, Lumma says that the CI system that will support successful Selenium deployments will have nine attributes:
- It will be fast, with near-instantaneous results (containerization and the appearance of unikernels).
- All changes tested before they are merged into master. If they are not, then you are not practicing true CI.
- Version control that lets you reproduce your application and its tests consistently.
- Trustworthy results where non-determinism is not tolerated.
- Build and Test architecture templates and enforced best practices for build, test and deploy.
- Product Analytics and real quality feedback for stakeholders.
- Cutting edge technologies which will interleave with Selenium, a mix of visual, mobile, static analysis, performance, security and speciality tools.
- Praxis analytics: process data to see what works best.
- Dogfooding, so CI tools are built with CI.
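Two of the attributes above, every change tested before it is merged into master, and trustworthy results with zero tolerance for non-determinism, can be combined into a simple pre-merge gate. The sketch below is a hypothetical illustration, not any real CI product’s API: `gate_change`, `run_suite` and `runs` are invented names, and the gate simply reruns the suite and blocks the merge unless every run passes 100%.

```python
# Sketch of a pre-merge CI gate: test every change before it reaches
# master, and treat flaky (non-deterministic) suites as failures.
# Names here are hypothetical, not from any real CI system.

def gate_change(run_suite, runs=3):
    """Run the suite several times; allow the merge only on a 100% pass rate.

    run_suite: zero-argument callable returning True if all tests passed.
    A single failure across any run blocks the merge, so flaky suites
    cannot sneak changes into an "always green" master.
    """
    results = [run_suite() for _ in range(runs)]
    return all(results)

# A deterministic, passing suite is allowed through...
print(gate_change(lambda: True))   # True
# ...while a flaky suite (fails on its second run) is blocked.
flaky = iter([True, False, True])
print(gate_change(lambda: next(flaky)))   # False
```

The design choice worth noting is that flakiness is rejected outright rather than retried until green, matching the “non-determinism is not tolerated” attribute.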