There are very few people in the software testing field who have dedicated their careers to helping other people get better at software testing. I have worked with many excellent testers, and Alan Richardson is by far amongst the most talented people I have worked with. Alan’s grasp of software development, exploratory testing, and test automation is exemplary, but most importantly, his ability to dig deeper into a subject, and explain it in a way that makes it easier for others to grasp, is his biggest strength. Alan has made huge contributions to the software testing field with his presentations at conferences, blogs, articles, workshops, books, courses, and so on. The most important aspect of his work is its practicality, as it is backed by years of experience and a commitment to learning, consciously and continuously.
It gives me great pleasure to introduce Alan to our readers and ask him a few questions about the software testing craft.
Anand: Alan, thank you for taking the time out to talk to us. I am sure many people in the software testing community already know you as ‘the evil tester’, but still, please tell us something about yourself.
Alan: Thanks Anand. I work as a Software Development and Test Consultant, helping people improve their testing and development processes. I spend a lot of my time continuing to improve my Software Testing and Programming skills.
I’ve been trying to spend a bit more of my free time doing something else, but that means that my retro game collection is expanding, and I’ve now started adding Japanese Famicom games to it.
Anand: We have worked together many times, but I guess I never asked you about the name of your blog. Why did you name it Evil Tester?
Alan: I started drawing Evil Tester cartoons when working on waterfall projects to alleviate the stress of a poor process. And I kept saying I would buy the domain name. A few years later, I realised that no-one had bought the domain, so I bought it. I find the notion of “Evil” in Software Testing liberating; it frees me up to test systems in ways that other people don’t. I tried to cover this in “Dear Evil Tester” and a few videos online.
Testers need to do the things that other people are not prepared to do. To do things that other people would not conceive of doing. Things that other people do not think are ‘right’. In films and books, the people that do this are often the bad guys. If we only stay on the path of righteousness, then there is a whole other set of paths that we haven’t explored – who else is going to do that?
Anand: Alan, when did you become passionate about Software Testing? Was there any trigger, did you develop interest over a period of time, or was it love at first sight?
Alan: I started by wanting to improve Software Development – I thought that building CASE tools which would generate all code from diagrammatic models would prevent all Software Development problems. I swallowed much of the nonsense I was fed at University.
When I started programming Software Testing tools for a Testing Consultancy, I realised that if I moved into testing then I would be able to code, automate, test, design and manage. And only by utilising all those skills could I finally learn to prevent all Software Development problems.
When I started paying attention to testing, I realised that in addition to the technical IT skills and test techniques, I also had to learn more psychology, influence and soft skills than I needed as a developer. I think it was the constant learning and the need to use ever more skills that really drew me in.
I’m still working on preventing all Software Development problems.
Anand: You must have witnessed quite a few changes in our field. What do you feel about these changes? Which changes are good, which ones are not so good, and what changes would you like to see in our field?
Alan: I like the movement towards leaner and more agile development approaches.
I like that more and more people are adopting an exploration basis for their Software Testing process rather than defining scope up front and attempting to script away the observation and investigation parts of Software Testing.
The changes I would like to see are more people learning more of the Software Development skill-set and trying to blur the edges of their specialisms. That doesn’t just mean programming, that also means design, systems architecture, technology, analysis. The whole range of Software Development skill sets.
I also think we might have forgotten about model based approaches and I think I’d like to see more research into those.
Anand: What can a tester do to stay relevant in the AI/ML-driven world, with interfaces such as Alexa, Siri, and so on replacing the UI completely?
Alan: Until we actually work on those technologies it is hard to stay up to date with them. But hopefully, everyone who is testing software is trying to test below the UI of whatever system they are working on. In that way they can reduce their dependence on a UI to guide their testing.
And the more that people learn about Systems in general, as opposed to specific technologies, the more they will develop the thought processes that allow them to apply their testing knowledge to any software.
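As a hypothetical illustration of testing below the UI (the function and its business rule are invented for this example), the same logic that a graphical or voice interface would drive can be exercised directly:

```python
# Hypothetical application-layer function -- in a real system this would be
# the code sitting behind the UI (e.g. a service method or HTTP endpoint).
def transfer_allowed(balance, amount):
    """Business rule: a transfer must be positive and within the balance."""
    return 0 < amount <= balance

# Testing below the UI: drive the logic directly, with no browser or voice
# interface involved, so the tests do not depend on any particular front end.
assert transfer_allowed(100, 50) is True
assert transfer_allowed(100, 0) is False    # zero amount rejected
assert transfer_allowed(100, 150) is False  # overdraft rejected
```

The same checks would hold whether the front end is a web page or an Alexa skill, which is the point: the dependence on a UI to guide the testing is reduced.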
Anand: How do you define the boundary (of what to test) when the bulk of the complexity is handled by services such as Alexa and Siri? How do you safeguard your application?
Alan: How we test it will partly depend on how we build it. The conversational interface creates a slightly different set of abstractions than we are used to between ourselves and the underlying functionality.
The parsing and matching process we choose to use will either limit or explode the combinations of data that we have to consider as input.
The hard part is probably going to be distinguishing between testing our app, and testing the interface which we have delegated to the voice recognition software, e.g., we are going to be relying on Alexa and Siri to distinguish between accents, but we’ll have to make sure that the application isn’t vulnerable to odd commands if the accent recognition goes wrong.
At the moment the conversational interfaces don’t seem to be much more complicated than the text adventure games of yore, but clearly the functionality sitting behind the interface could have a lot more impact than the sandboxed environment of an adventure game.
Safeguards will have to depend on the application we are using it for. We might find that some systems will have to play back the command before it is exercised, so we can review it. And we might even find that some systems will textually render the commands we verbally give them, so that we can review the command before it is exercised and avoid the ambiguity that arises when we listen to voices.
In the same way that we safeguard verbal orders to people: sometimes we replay them back for clarification, and only when we have a verbal agreement do we act on them.
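That read-back safeguard could be sketched as follows (all names here are invented for illustration): a handler renders the parsed command back as text and only acts once it is explicitly confirmed.

```python
def execute_command(parsed_command, confirm):
    """Render the parsed command back as text; act only on confirmation.

    `confirm` is a callable that receives the rendered text and returns
    True or False -- in a real assistant it would play audio or show text
    so the user can review the command before it is exercised.
    """
    rendered = f"Did you mean: {parsed_command['action']} {parsed_command['target']}?"
    if confirm(rendered):
        return f"executed: {parsed_command['action']} {parsed_command['target']}"
    return "cancelled"

# A misheard command is cancelled rather than exercised.
assert execute_command({"action": "delete", "target": "all files"},
                       confirm=lambda text: False) == "cancelled"
assert execute_command({"action": "play", "target": "music"},
                       confirm=lambda text: True) == "executed: play music"
```

Passing the confirmation step in as a callable also makes the handler easy to test in isolation from any audio or display hardware.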
Anand: How do you check if AI/ML-driven systems are ethically and morally correct? How do you ensure that the dataset they got for training wasn’t biased? What can a tester do to help organisations?
Alan: Ethics and morals will depend on the functionality that sits behind the interface. As with anything software related, we’ll have to try and specify what we mean by ethics and morals related to the functionality under test before we can say much about it.
Biased might not be a useful word to describe the data sets. We need to make them representative of the variability that the system will be exposed to in the real world. And that can be very hard. Fortunately, if we are working with something like Alexa and Siri, then we have an abstraction between us and the application, so we really care about the results of the translation that Alexa and Siri give us. It becomes much harder when we are responsible for writing the translation routines between the real world and the system. And then it becomes even harder if there is no intermediate translation layer and the input directly results in a response.
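One simple way to probe that representativeness (the categories, numbers, and threshold below are invented for the example) is to compare the proportion of each category in the training data against the proportion expected in the real world:

```python
from collections import Counter

def proportion_gaps(samples, expected):
    """Return, per category, how far the sample proportion is from expected."""
    counts = Counter(samples)
    total = len(samples)
    return {cat: abs(counts[cat] / total - share)
            for cat, share in expected.items()}

# Toy accent labels for utterances in a training set vs. real-world shares.
training = ["uk"] * 80 + ["us"] * 15 + ["in"] * 5
expected = {"uk": 0.3, "us": 0.4, "in": 0.3}

gaps = proportion_gaps(training, expected)
# Flag categories that are badly under- or over-represented.
skewed = [cat for cat, gap in gaps.items() if gap > 0.1]
assert set(skewed) == {"uk", "us", "in"}  # all three are skewed in this toy set
```

This only checks coverage of categories we already know about; the harder problem, as noted above, is knowing what variability the real world will actually present.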
The issue that many people face with AI is that the training sets are historical data and our cultural norms change over time, such that there may be a perceived cultural bias in the training data sets. This, and the notion of ethics, is still being discussed by specialists in the AI field, so I’m more likely to let them investigate the ethics and morals for the moment. A recent overview of the topic was covered in a Microsoft Research podcast, “Keeping an Eye on AI with Dr. Kate Crawford”.
Hopefully we won’t try and use AI and ML for everything and we’ll still have some algorithms in there that we can test.
It is going to be an interesting set of systems to test, and I’m slightly envious of the people who get to test them early.
Anand: What would be your advice to testers who are not satisfied with how the software is delivered in their project? What should they do to bring about change?
Alan: There are two things to separate out there – how to deal with dissatisfaction, and how to effect a change.
Dissatisfaction is easy – start drawing Evil Tester cartoons – or your own variant thereof. I don’t recommend resorting to Voodoo.
The reason for drawing it, or writing it down is to be clear what you feel dissatisfied with. It’s important not to generalise at this point, be very specific about what you feel dissatisfied with: what happens, when, in what way are you dissatisfied, etc.
And perhaps the cartoons, by themselves, don’t bring the change.
Check if other people feel as dissatisfied as you do, and if their analysis of the dissatisfaction is the same as yours. If it is then find a way of explaining the specific situations that trigger your dissatisfaction, and how it makes you feel, to the wider team or people that you think can help.
Also consider that perhaps the change needs to be in your response, or your attitude. Those are, after all, easier and faster to change than other people.
Before you try changing the team or other people, do think through how people might react to your explanation of the situation – do you have evidence? Have you phrased it objectively enough? Is it accurate and specific, or is it overly generic such that people can disagree with it?
I’ve learned to ask questions, and ask them at appropriate times, e.g., standups, retrospectives are often good times. And different phrasing of questions might help temper reactions, e.g., “How do we know that…”, “I’m not sure if we are …”, “Do we think that this can…”.
But this is a big subject to delve into. Sometimes, I’m blunt. Sometimes, I’m subtle. Sometimes, I just make changes. Sometimes, I ask if change is required.
Anand: Do you have any advice for the people who are working in the software testing field, but not sure about the future prospects? What can they do to stay motivated and move forward with their careers?
Alan: Future prospects are partly down to luck, and the geographical location you are in.
But you can influence your luck by:
- Researching and staying up to date
- Reading blogs, signing up for newsletters
- Creating Google alerts for the topics you are interested in
- Practicing your testing
- Investigating the areas that you are interested in, e.g., ML, AI, Model Based Testing, Web Testing, Automated Execution, etc.
- Improving all your technical skills
- Creating a blog or a github account and releasing your notes and learnings
- Going to meetups and networking
I’m motivated by constantly learning, so I try to work in companies where I will learn new things. And I try to stay up to date and write about, or create videos of, my interests so that people associate me with the things I’m interested in, which hopefully means I’ll be able to work on those topics.
Anand: Why is terminology (such as testing vs checking or QA vs testing) so fiercely debated in our field? What is your opinion about this?
Alan: Terminology has an important role to play. And always has.
I read this yesterday in a Sci-fi novel from 1975: