API Testing Is Critical
owen
Hi @Adam Sandman! I have a question to kick things off. You started Inflectra almost 15 years ago, what has changed the most in the software industry over the past 15 years?
Adam Sandman
Hi @owen that's a good question. I think one of the biggest changes is that we expect "consumer" level usability and service in everything we do. When we started, you could create software for a business niche and it would be acceptable for the user to download it, install it, read the manual, and have multiple in-person meetings to get it up and running. Now we expect to be able to open up any application or system and understand how to use it right away.
You have to make everything so easy to use that with a couple of videos and some links you can get going. The move to SaaS / cloud really helped. I remember when we first started selling SpiraTest, it was only download, and you had to have a running webserver on your laptop to try it. We didn't yet have reliable screen sharing like GotoMeeting, WebEx or Zoom, so I remember trying to explain to someone on the phone for 1 hour how to get everything installed and working.
Then after that we realized that they needed testing software for students/exams, not for software, so it was a waste of time.
I cannot imagine having that same interaction now, 15 years later!
We expect to be able to sign up for a new service (for anything) and be up and running in 5 minutes, and be able to figure out how to use it in 1 hour; otherwise you're toast.
Tom Stiehm
@Adam Sandman What do you see as the biggest challenges to testing right now and how can we overcome those challenges?
Adam Sandman
@Tom Stiehm good question, hard to answer (!). There are many, but the problem always seems to be that we need to do more testing in more ways (performance, security, usability) while the time to market with each cycle gets shorter. So how do you do more with less? Automation seems to be part of the solution, but we see so many companies struggle with automation.
Overcoming them is harder. For example, one thing is to avoid the perfection trap. What I mean is that if you cannot test everything, you often feel 'despair' and test nothing. So if you can test some of the performance (for example) of the parts of the app that you know can be slow or are critical, then do that. If you find something, then look deeper. Don't ignore simple, quick wins even if that means you don't test every part.
Adam Sandman
@Tom Stiehm what do you think?
Tom Stiehm
I think one big challenge will be moving out of the manual-testing-only mode many organizations employ now. There are good things about test automation, but it requires both a testing skill set and a test automation skill set. Most orgs haven't focused on test automation as a skill set, so it is largely absent from them.
I have worked with really good test automators and they are hard to find. Part of it is the mindset that testers should be cheap because testing is a less valuable skill set than programming.
So I would sum up the challenges as:
Adam Sandman
That's a good point. People who are good at test automation may not be the same folks who are good at more freeform 'exploratory' style testing, so I think it does broaden the team. I have seen clients want to make their manual testers suddenly do all the automation, and it often fails. Usually pairing someone who is more functional with someone more technical seems to work better.
Tom Stiehm
1. Growing the test automation skill set
2. Getting orgs to value testers and test automation as first-class citizens in their quest to create great software.
Adam Sandman
I agree, and also not devaluing the skills and knowledge that the less technical testers bring. If you see testers as basically failed programmers (this is not my view so please don't hate, I'm just reflecting on what I see) then you fall into this trap.
Too many times the view is that "you're not a good programmer, so maybe you could be a tester" :scream:
I have seen seasoned programmers fail at test automation just as I have seen manual testers who are more functional experts fail as well.
Test automation is often harder than writing the app in the first place!
Tom Stiehm
@Adam Sandman We had a lot of great talks at STAREast about AI and testing, what is your view of where AI can help testing?
Adam Sandman
Good question. We have been looking at AI and ML for our automated testing product. A revelation our team had a while back came from watching a manual tester learn to do test automation. When testing normally, they were basically thinking like a human end user and trying to use the application. When doing automation, however, we are basically asking a human to behave like a machine: manually copying CSS and XPath from browsers, opening up the DOM, etc. So I think AI can look for patterns and reduce some of that burden.
For example, if you test the same application over and over, AI can detect how the app changes and make things easier. We can have machines do what machines do best and let humans act and think like users.
AI could say "this app typically uses IDs that change every update; let's not use those to locate elements"
(simple example)
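To make that idea concrete, here is a minimal sketch (this is not Inflectra's actual algorithm; the attribute names and snapshot format are invented for illustration) of how a tool could compare two recorded snapshots of the same element and prefer a locator attribute that stayed stable across builds:

```python
# Hypothetical sketch: rank locator attributes by stability across builds.
# A tool (or an ML model trained on many runs) could prefer attributes
# that did not change between two snapshots of the same element.

def stable_attributes(snapshot_a: dict, snapshot_b: dict) -> list:
    """Return attribute names whose values were identical in both builds."""
    return sorted(k for k in snapshot_a
                  if k in snapshot_b and snapshot_a[k] == snapshot_b[k])

def pick_locator(snapshot_a: dict, snapshot_b: dict,
                 preference=("data-testid", "name", "aria-label", "id")) -> str:
    """Pick the most-preferred attribute that proved stable across builds."""
    stable = set(stable_attributes(snapshot_a, snapshot_b))
    for attr in preference:
        if attr in stable:
            return attr
    raise LookupError("no stable locator attribute found")

# The same login button recorded in two consecutive builds: the framework
# regenerates 'id' on every deploy, but 'data-testid' stays fixed.
build1 = {"id": "btn-8f3a", "data-testid": "login", "class": "btn primary"}
build2 = {"id": "btn-91cc", "data-testid": "login", "class": "btn primary"}

print(pick_locator(build1, build2))  # -> data-testid
```

Here the volatile `id` is automatically passed over in favor of the attribute that survived the update, which is exactly the "don't use those to locate elements" heuristic described above.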
So I think AI will make things easier and better.
Adam Sandman
One of the most interesting testing challenges we heard about from an (un-named) client was that they were testing missile guidance systems. They had to land a cruise missile into the three windows of a terrorist cell simultaneously and they had to test the guidance system in software to make sure it worked. They couldn't replicate the production environment as it were :wink:
michael.a.bolton
cf. the movie Pentagon Wars. It's a great movie for testers.
Adam Sandman
So they ended up writing a scraper for the GPS tracking screen and simulated all of the inputs and outputs from the missile. They were able to write an automation framework that tested all of the signals in and out, and could then read the commands sent to the mapping screen to see whether it drew things in the right place with the correct waypoints.
That was one of the most memorable for me at least
The cool part from a testing perspective was that they realized that trying to test the graphic image of the screen was fruitless, but they could dig under the UI and get the XML commands that were used to draw the map, and simply test that. They were not testing the mapping system itself, so as long as the XML was right, they could "assume" (for their purposes) the map was right. That made it much more testable.
They also wrote a custom xUnit plugin for MATLAB to do some other stuff, but that was cool as well.
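The "dig under the UI" approach can be sketched in a few lines (the XML element and attribute names here are invented; the client's real draw-command schema was not described): instead of comparing screenshots, parse the draw commands sent to the map and assert on the waypoints directly.

```python
# Hypothetical sketch of testing under the UI: parse the XML draw commands
# sent to the mapping screen and check the waypoints, rather than trying
# to compare rendered pixels. Schema is invented for illustration.
import xml.etree.ElementTree as ET

def extract_waypoints(xml_commands: str) -> list:
    """Pull (lat, lon) pairs out of the map's draw-command stream."""
    root = ET.fromstring(xml_commands)
    return [(float(wp.get("lat")), float(wp.get("lon")))
            for wp in root.iter("waypoint")]

commands = """
<draw>
  <route id="test-path">
    <waypoint lat="34.05" lon="-118.25"/>
    <waypoint lat="34.10" lon="-118.20"/>
  </route>
</draw>
"""

expected = [(34.05, -118.25), (34.10, -118.20)]
assert extract_waypoints(commands) == expected
print("waypoints match")
```

As in the story, this doesn't test the mapping renderer itself; it assumes that if the XML is right, the map is right, which is what made the system testable.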
owen
Got another one for you @Adam Sandman. For someone who has a background outside of software, what lessons can software testing and agile give to the wider community?
Adam Sandman
Good question Owen.... let me think about that for one second...
I think there are good lessons from testing and agile for the world of public policy. Agile teaches us that getting something initial out into the world to get feedback on (an MVP) is better than trying to devise the ultimate solution in a vacuum. In public policy, however, that is often not how people operate. Imagine if politicians and other people in the field of policy used the agile approach. Legislation would not necessarily be so adversarial. Testing also teaches us that what we think will happen is not what actually happens.
Testing allows us to determine the risk of releasing a new feature before we release it. If, in public policy and other fields, we did actual testing and were open to changing our minds and our ideas, we could be so much more effective. For example, we could release a new law on an experimental basis: citizens in state X try out the new health care law, and after 90 days we do a retrospective and look at the test results. Then in state Y we try v1.1 of the law and measure those results.
Iterative public policy :wink:
TDL = Test Driven Legislation
owen
That’s a very interesting take on iteration in action.
Adam Sandman
thanks
owen
I wonder if companies like Netflix might start iteratively creating movies. Publish a movie. Release it on Netflix. Receive feedback. Edit the movie based on the feedback. Re-release. Might make for a fun experiment at the very least!
Adam Sandman
Ha, that's interesting. Or certainly don't film the entire season. Release a couple of episodes, track the viewership, see which characters and subplots resonate, and then focus on those
editing the already released movie feels a bit 1984
I can see the temptation. Think of a movie like American Beauty: it won Oscars, but now we find it hard to watch because of the leading actor. Would they edit it to have someone else and re-release?
owen
So basically refilm just those scenes? That would be interesting. I can see redoing the movie in its entirety, but I think it would also be cool to make modifications to the movie like you suggested just to see how it looks. Not sure how expensive that would get, either.
Adam Sandman
That's true... imagine a scene with the original actress and the actor in question. If they re-filmed it, the actress would be much older and look different, but they could CGI in a new leading actor, or a deep-fake of just the head
Oh jeez now we're getting all Black Mirror on a Thursday
BTW. The creator of Black Mirror (Charlie Brooker) said he's taking a break because the real world is worse than even he could imagine.
(off topic)
On a more serious note, another lesson from software testing would be the use of automation.
People always fear automation will take their jobs. The same is/was true of testing. We have found that as we automate a lot of the manual scripted testing, we have more time for freeform exploratory testing.
In a factory we replace assembly line workers with robots, but now we can customize the end product, so that humans can apply a 'handmade' finish to a base product that is machine made. Especially if we move to manufacturing where the core product may be made centrally, but the final assembly or customization is done closer to the customer.
owen
How does that affect business value?
michael.a.bolton
I find it very troubling to hear that a machine made something.
Adam Sandman
I am not an expert so please take this with a large heaping spoonful of salt
michael.a.bolton
Machines don't make things. People make things, and the machinery has a role in making things.
Adam Sandman
We originally had artisans who created everything handmade, probably close to the place it was being used. We automated the process into an assembly line, treating humans, to a large degree, as automatons performing a repetitive task (attach bolt X to assembly Y). Many factories have replaced that with automated assembly lines where robots perform those tasks. However, as Tesla found, there is not always a positive ROI in automating all tasks in such a factory.
In testing I have seen people try to automate a long, complex business process that they will only run a few times. Usually the ROI of automating it is negative. If they just automated the basic login screens and then let a person test some of the business flows, the overall ROI would be better.
michael.a.bolton
Yes. And here's something to be learned from that: Toyota became aware of that problem quite a while ago. And Elon Musk didn't learn a damned thing from history.
Adam Sandman
Absolutely, and in testing we don't always learn from history either sadly
Also in response to the phrase 'machine made something' your point is well made, I was just paraphrasing.
michael.a.bolton
Yes, Adam! The amount of effort that people spend on building a machine to push other buttons on some other machine that they've just built!
Adam Sandman
I know... When we do demos of our automation products to customers we can quickly see who is going to fail. They don't look at that
they think automation = cheaper, better, faster
michael.a.bolton
I know that I'm fussy about language. I'm that way because it's both a lens and a sound system for our thoughts.
Adam Sandman
not automation = good at repetitive mindless tasks
I was also typing whilst eating lunch which didn't help :wink:
by the way, I do think that WHILST is a much underused construct in American English
michael.a.bolton
Yes. We'd be in much better shape as a craft if we called "automated testing", as most people perform it, for what it really is: automated typing.
Adam Sandman
yes, automated clicking and typing. Basically it's action automation
automating actions that a user might perform
and automated checking that something that appeared matches what we expect in response to the action
I'm curious, did Tesla end up scaling back automation in reality?
I wasn't sure what actually happened vs. what the newspapers reported.
michael.a.bolton
Ah! Harry Collins offers a nice scalpel for that. He distinguishes between actions and behaviours. A behaviour is an observable, describable physical thing. Action, for Harry, is behaviour plus intention.
Adam Sandman
everyone lost interest once Tesla started hitting its production targets
Hmmm, interesting. So if a script simulates a click on a button, that is not an action in his definition
michael.a.bolton
Yes. Tools can automate testers' behaviours, but not testers' actions.
Adam Sandman
because the script has no intention. The creator of the script had one, maybe in a comment if we're lucky
that's interesting, I will have to think about that
michael.a.bolton
I've found it very useful.
Adam Sandman
sometimes people call them events
e.g. I will be simulating a click event on that button
event always makes me think it's more observable though
michael.a.bolton
The creator of the script produced a recipe for the behaviour, but without that comment (or without collective tacit knowledge about the intention), we have to infer the intention.
An event is observable.
Adam Sandman
So actually it sounds like a very good rationale for making comments mandatory in any automation system, especially comments that describe some of the 'why' not just the what
ADT, action driven testing?
vs BDD
somehow encapsulate the intent
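The "encapsulate the intent" idea can be sketched as follows (this is an invented illustration, not an existing framework; the function and field names are hypothetical): carry the tester's 'why' alongside the automated check itself, so a reader a week later can recover the intention and not just the behaviour.

```python
# Hypothetical sketch: embed the tester's intent in the check itself,
# so the 'why' travels with the pass/fail result instead of living only
# in the original author's head.

def check(description: str, intent: str, condition: bool) -> dict:
    """A tiny wrapper that records intent alongside the check result."""
    return {"what": description, "why": intent, "passed": bool(condition)}

result = check(
    description="discount shown as 10% for an order total of $120",
    intent="orders over $100 should get the loyalty discount; this guards "
           "the pricing rule, not the UI layout",
    condition=round(120 * 0.10, 2) == 12.0,
)

assert result["passed"]
print(result["why"])
```

When a check fails weeks later, the `why` field gives the developer the context that would otherwise have to be reconstructed (or disputed) from memory.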
michael.a.bolton
Testing is always action driven. If there's no intention, there's no testing. (edited)
Adam Sandman
True, but how is the intent captured in a way that can be understood by others? That is what I think trips up teams. When I did this test, I had a specific intent in mind. When I discuss it with the developer a week from now, I may not remember, or the developer may dispute my intent
I have seen that lead to arguments and much gnashing and wailing
michael.a.bolton
Among adults?
Adam Sandman
oh yes
michael.a.bolton
I wouldn't say so.
Adam Sandman
I've had people run crying to the bathroom
michael.a.bolton
Well, I'm being a little glib.
Adam Sandman
and it's because they care SO much about the quality that it makes them upset
michael.a.bolton
But what is the argument actually about?
If they understood quality, they'd be fine.
Quality is not WHAT I WANT, RIGHT NOW. Quality is value to some person; multi-dimensional; emotional, and not strictly rational.
Adam Sandman
usually that the tester found something and cannot quantify it because there are still unknowns. The developer is frustrated because they want to "fix" it but there is not enough context to do that yet.
michael.a.bolton
The first step in addressing disputes like that is to recognize that different people consider different dimensions of quality in different ways, and that's okay.
Adam Sandman
So both are frustrated with the lack of context or knowledge.
michael.a.bolton
And that's okay.
Adam Sandman
I agree. Usually take a step back and assess the big picture.
michael.a.bolton
Because the confusion goes away.
Adam Sandman
However if you throw in an artificial deadline and general stress then you get into conflict.
michael.a.bolton
Ah, that's the adult thing again.
Adam Sandman
A good manager will work to prevent that stress etc.
Absolutely, but sometimes that's missing from the equation
michael.a.bolton
And good professional people will tell baby, calmly and firmly, that he can't have candy right now.
Adam Sandman
then baby kicks the tray of food over the adult :wink:
(just kidding)
michael.a.bolton
James Bach has this wonderful thing: what's the difference between a doctor and a drug dealer?
Adam Sandman
a piece of paper :wink:
michael.a.bolton
No, sir. :slightly_smiling_face:
Adam Sandman
I don't know, what's the difference between a doctor and a drug dealer?
michael.a.bolton
A doctor provides customer service. A drug dealer provides customer satisfaction.
As a tester, I want to provide customer service. That means that I must be emotionally prepared for my client's emotional reaction to news about the pain.
Here's a lesson that testing can give to the wider community: it might be a good idea to pay attention to real journalists.
People who seek out actual, justifiable facts about things in the world, as distinct from people who merely opine about things they like and don't like.
Adam Sandman
Amen to that
michael.a.bolton
Real journalists, it seems to me, go out and gain experience with the world.
Real testers gain actual experience with the actual product.
Adam Sandman
I would say that any professional in software development, project management, and testing should be willing to speak truth to power; agreeing with the client just to 'get along' does no one any favors
michael.a.bolton
They often use tools to help them with that, but the tools are not the central thing. The real journalist talks with real people, and finds out how things are really affecting them.
Testers need to remember that too.
Adam Sandman
I agree.
doctors will tell you the bad news; so will good journalists.
Adam Sandman
Thanks everyone for having me as the guest today. I hope y'all stay safe and sane, and I look forward to seeing you virtually at Agile DevOps West next month. Thanks @owen for organizing.
owen
Thanks Adam!! See you virtually at #agiledevopsvirtual then virtually again at InflectraCon!