You may not think that artificial intelligence (AI) or machine learning (ML) have much to do with software testing. So far, software tests have not been a major part of the AI and ML conversation. But I’m here to suggest that they should be. In this post, I offer some tips on how you can use AI or ML in conjunction with production data to drive a smarter type of regression testing to [...]
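The full approach lives behind the link, but the core idea of letting production data steer regression testing can be sketched very simply: weight your regression targets by how often real users actually hit them. The page names, counts, and function below are hypothetical illustrations, not anything from the original post.

```python
from collections import Counter

def prioritize_regression_targets(production_hits, top_n=3):
    """Rank application areas for regression testing by real user traffic.

    production_hits: iterable of page/endpoint names observed in production logs.
    Returns the top_n most-visited areas, i.e. where a regression hurts most.
    """
    counts = Counter(production_hits)
    return [page for page, _ in counts.most_common(top_n)]

# Hypothetical sample drawn from production logs
hits = ["/home", "/article", "/home", "/search", "/article", "/home"]
print(prioritize_regression_targets(hits, top_n=2))  # ['/home', '/article']
```

A real ML-driven version would replace the raw counts with a learned model, but even this frequency heuristic focuses regression effort where user impact is highest.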
Greg Sypolt, Director of Quality Engineering at Gannett | USA Today Network, maintains a developer, quality, and DevOps mindset, allowing him to bridge the gaps between all team members to achieve desired outcomes. Greg helps shape the organization’s approach to testing, tools, processes, and continuous integration, and supports development teams in delivering software that meets high quality standards. He's an advocate for automating the right things and ensuring that tests are reusable and maintainable. He actively contributes to the testing community by speaking at conferences, writing articles, blogging, and being directly involved in various testing-related activities.
Software quality is a journey, not a destination. Where is your organization in its quality journey?
If the answer to that question is not what you wish — in other words, if your software quality is not as strong as you know it can or should be — then it’s time to rethink your approach to the software-quality journey. For many organizations, that will mean reconfiguring the ways in which different teams interact, and perhaps even creating a brand-new workflow that accomplishes a higher level of quality through better software testing.
With these goals in mind, I’d like to offer some pointers in this post, based on what I've learned over the years building a culture of software-quality excellence in which every member of the organization owns quality. [...]
The chief dilemmas of native mobile app testing include the lack of parallelism, the cost of concurrency with real device hardware, and the challenge of making tests efficient.
In this article, I’d like to discuss a solution that can help overcome each of these problems: Running your Google Espresso tests on every pull request, with faster feedback using emulators. While this technique should not be the be-all, end-all of your testing strategy, it can help to meet your speed goals while still providing broad coverage and accurate feedback [...]
Greg Sypolt (@gregsypolt) is Director of Quality Engineering at Gannett | USA Today Network, a Fixate IO Contributor, and co-founder of Quality Element. He is responsible for test automation solutions, test coverage (from unit to end-to-end), and continuous integration across all Gannett | USA Today Network products, and has helped change the testing approach from manual to automated across several of those products. To identify improvements and testing gaps, he conducted face-to-face interviews to understand each product's development and deployment processes, testing strategies, and tooling, and he runs interactive in-house training programs.
by Greg Sypolt
You’ve heard of Artificial Intelligence (AI). The term has been around since Allen Newell, Herbert A. Simon, and Cliff Shaw wrote the Logic Theorist in the 1950s.
Historically, it’s safe to say you haven’t often heard AI and test automation discussed in tandem. But that is changing. AI test automation is poised to play an increasingly important role in the future of automated testing.
AI test automation is still a relatively new concept to me, but it’s also one that I am exploring eagerly as I work to stay at the forefront of the automated testing field. In this article, I want to take the opportunity to highlight why AI testing is so important, explain how AI bots can be used in automated testing, and discuss some of the challenges that we still need to solve in order to make the most of AI testing.
Read the entire blog post
by Greg Sypolt
I was excited when I learned that I could capture network traffic using my existing Selenium scripts, or with scripts for PhantomJS, the headless test framework. A whole new set of tests can now be added to the continuous integration (CI) pipeline. We often come across requirements to capture and analyze browser network traffic in real time: checking the HTTP status of the page, examining the headers, validating parameters, doing performance analysis, and more. It is another testing strategy for protecting the end-user experience while people are using your web application. Read more...
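The capture mechanism itself (for example a proxy, or PhantomJS's resource callbacks) is covered in the full post; once the traffic has been captured, validating it is plain data checking. The sketch below assumes the captured entries have already been flattened into dictionaries with hypothetical `url`, `status`, and `headers` keys — the field names and sample data are illustrative, not from the post.

```python
def find_bad_responses(entries, max_status=399):
    """Return captured network entries whose HTTP status signals a problem."""
    return [e for e in entries if e["status"] > max_status]

def missing_header(entries, header):
    """Return URLs of captured entries that lack a required response header."""
    return [e["url"] for e in entries if header not in e["headers"]]

# Hypothetical captured traffic from a single page load
captured = [
    {"url": "/index.html", "status": 200,
     "headers": {"Content-Type": "text/html", "Cache-Control": "no-cache"}},
    {"url": "/app.js", "status": 200,
     "headers": {"Content-Type": "application/javascript"}},
    {"url": "/logo.png", "status": 404, "headers": {}},
]

assert [e["url"] for e in find_bad_responses(captured)] == ["/logo.png"]
assert missing_header(captured, "Cache-Control") == ["/app.js", "/logo.png"]
```

Checks like these can run on every CI build, turning the captured traffic into regression tests on page health.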
by Ashley Hunsberger
Behavior-Driven Development, or BDD, can help get your teams building the RIGHT product. Although I’ve heard the term used interchangeably with Test-Driven Development (TDD), I personally see BDD as an extension of TDD that helps your team focus on the business's goals. While TDD provides tests that drive development, those tests may or may not be helping you meet those goals.
Read more at http://bit.ly/1Z1zR12
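In Cucumber-style BDD tools the scenario would live in a Gherkin feature file; as a minimal stand-in, here is the same Given/When/Then shape expressed directly in Python, with a hypothetical ShoppingCart standing in for whatever business behavior your team cares about.

```python
class ShoppingCart:
    """Hypothetical domain object used to illustrate a BDD-style scenario."""
    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)

def test_cart_total_reflects_added_items():
    # Given an empty cart
    cart = ShoppingCart()
    # When the customer adds two items
    cart.add("book", 12.50)
    cart.add("pen", 1.25)
    # Then the total reflects both purchases
    assert cart.total() == 13.75

test_cart_total_reflects_added_items()
```

The point of the structure is that the Given/When/Then narrative describes a business outcome, not an implementation detail — that is what keeps the test aligned with the goals TDD alone does not guarantee.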
by Greg Sypolt
Following established best practices when using Cucumber in your automated tests helps ensure that your automation experience will be successful and that you’ll get the maximum return on investment (ROI). Let’s review some important best practices to adopt before you start developing Cucumber tests.
Read more at: http://bit.ly/1Ubzdyv
by Ashley Hunsberger
2015 was quite the year for quality in almost every industry. Here are some defects (some disastrous, some just funny) that caught my attention over the last year, along with a few lessons they hold for our day-to-day test strategies, from data usage and environments to security and more.
Read more at http://bit.ly/1o0iSQ2
by Ashley Hunsberger
Even if you aren’t directly responsible for performance, it is important to consider it under the umbrella of quality. As a tester, how do you move forward and help drive performance quality (especially when you are new to this area, like me)? What are the ramifications of not considering performance within QA? Let’s take a look at what performance is, the questions QA can ask during design and implementation, some of the types of testing that can be done, and making performance part of your acceptance criteria (and, therefore, part of your Definition of Done).
Read more at http://bit.ly/1Q3boEN
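One concrete way to pull performance into acceptance criteria, in the spirit of the post above, is to encode a latency budget directly in a test so a slow build fails like any other defect. The operation and budget below are hypothetical placeholders, not values from the post.

```python
import time

def within_budget(operation, budget_seconds):
    """Run an operation and report whether it met its latency budget."""
    start = time.perf_counter()
    operation()
    elapsed = time.perf_counter() - start
    return elapsed <= budget_seconds, elapsed

def render_homepage():
    # Hypothetical stand-in for the code path under test
    sum(range(10_000))

ok, elapsed = within_budget(render_homepage, budget_seconds=0.5)
assert ok, f"Performance budget exceeded: {elapsed:.3f}s > 0.5s"
```

Wiring a check like this into CI makes the performance budget part of the Definition of Done rather than an afterthought.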
by Greg Sypolt
I was inspired by Denali Lumma (@denalilumma) when she delivered a glimpse of the future in her talk about 2020 testing at the Selenium 2015 conference. The session was an excellent introduction, comparing scenarios from the minority of elite testing organizations with those of the more common development team. The elite companies consider infrastructure FIRST, while the majority thinks about infrastructure LAST. It got my wheels turning about the future of software development. I don’t have all the answers right now, but I want to be part of the movement to plan and build architecture with quality in mind. A few words come to mind when thinking about quality architecture: automation, scalability, recoverability, and analytics.
Read more at: http://bit.ly/1l0AFEs