đŸ€– Day 21: Develop your AI in Testing manifesto

You’ve reached Day 21! Throughout this challenge, as you’ve explored different uses of AI in Testing, you’ve uncovered its many associated pitfalls. To successfully integrate AI into our testing activities, we must be conscious of these issues and develop a mindful approach to working with AI.

Today, you’re going to craft a set of principles to guide your approach to working with AI by creating your own AI in Testing Manifesto.

To help shape your manifesto, check out these well-known manifestos in the testing world:

  • Agile Manifesto - Beck et al.: This manifesto emphasises values such as prioritising individuals and interactions over processes and tools, working software over comprehensive documentation, customer collaboration over contract negotiation, and responding to change over following a plan.
  • Testing Manifesto - Karen Greaves and Sam Laing: This Manifesto emphasises continuous and integrated testing throughout development, prioritises preventing bugs, and values deep understanding of user needs. It advocates for a proactive, user-focused approach to testing.
  • Modern Testing Principles - Alan Page and Brent Jensen: These principles advocate for transforming testers into ambassadors of shippable quality, focussing on value addition, team acceleration, continuous improvement, customer focus, data-driven decisions, and spreading testing skills across teams to enhance efficiency and product quality.

Task Steps

  1. Reflect on Key Learnings: Review the tasks you’ve encountered and consider the opportunities, potential roadblocks, and good practices that emerged.

  2. Consider Your Mindset: What mindset shifts have you found necessary or beneficial in working with AI?

  3. Craft Your Personal Set of Principles: Start drafting your principles, aiming for conciseness and relevance to AI in testing. These principles should guide your decision-making, practices, and attitudes towards using AI in your testing. To help, here are some areas and questions to consider:

    1. Collaboration: How will AI complement your testing expertise?
    2. Explainability: Why is understanding the reasoning behind AI outputs crucial?
    3. Ethics: How will you actively address ethical considerations such as bias, privacy, and fairness?
    4. Continuous Learning: How will you stay informed and continuously learn about advancements in AI?
    5. Transparency: Why is transparency in AI testing tools and processes essential?
    6. User-Centricity: How will you ensure AI testing ultimately enhances software quality and delivers a positive user experience?
  4. Share Your Manifesto: Reply to this post with your AI in Testing Manifesto. If you’re comfortable, share the rationale behind the principles you’ve outlined and how they aim to shape your approach to AI in testing. Why not read the manifestos of others and like or comment if you find them useful or interesting?

  5. Bonus Step: If you are free between 16:00 - 17:00 GMT today (21st March, 2024), join the Test Exchange for our monthly skills and knowledge exchange session. This month there will be a special AI in Testing breakout room.

Why Take Part

  • Refine Your Mindset: The process of developing your manifesto encourages a deep reflection on the mindset needed to work successfully with AI.

  • Shape Your Approach: Creating your manifesto helps solidify your perspective and approach to AI in testing, ensuring you’re guided by a thoughtful framework.

  • Inspire the Community: Sharing your manifesto offers valuable insights to others and contributes to the collective understanding and application of AI in testing.


3 Likes

Hello all,

Thanks, Simon, for sharing the Agile and Testing Manifestos.

The mindset shifts that are beneficial when working with AI are:

  • Embracing collaboration between AI and testers as the key: treating AI as an augmentation rather than a replacement.
  • Constant learning to keep informed about advancements in the field of AI.
  • Human-in-the-loop AI: AI can enhance my testing capabilities without replacing my judgement and critical thinking. So, I’m still in charge :)
  • Do the right thing: I will avoid unfairness and protect privacy when using AI for testing.
  • Continuous learning loop: I will keep learning about new AI tools for testing.
  • Explainable outputs: Understanding the decision-making behind AI outputs is crucial; otherwise the results can be flawed. It is a bit complex to do, but I will try to equip myself with the skills to identify potential biases and interpret complex AI models.
  • Focus on the user: In the end, both my friend AI and I test to make sure the software works great for people. That’s why we are here, right? To improve software for everyone. :intellectual:
9 Likes

Hi my fellow testers, here is my response to today’s challenge:

Reflect on Key Learnings: Review the tasks you’ve encountered and consider the opportunities, potential roadblocks, and good practices that emerged.

I think my key learnings in this challenge so far have been that we need to keep a critical mind when dealing with AI tool claims and not fall for the hype. We need to evaluate tools based on what they can actually do in practice rather than just what they claim.

Consider Your Mindset: What mindset shifts have you found necessary or beneficial in working with AI?

Again, I think it’s critical thinking skills, e.g. is the output from this AI tool actually correct or even true? What are its biases, and what dataset has it been trained on?

Craft Your Personal Set of Principles

Here are my principles:

  • Critical Analysis: I will need to analyse the claims an AI tool makes about its capabilities

  • Critical Thinking: I will need to judge the output of an AI tool against other trusted sources

  • AI inputs & Ethics: I will need to know what the tool’s biases are and what dataset it has been trained on

  • Continual Learning: I will need to keep my finger on the pulse of AI testing tool development so that I do not fall behind

7 Likes

Creating My AI in Testing Manifesto: A Journey of Reflection and Action

Introduction

Setting the Stage: Day 21 of the Challenge

After navigating through various challenges and revelations over the days, Day 21 marked a critical turning point. This day presented an opportunity to synthesize insights and experiences gained from exploring AI in Testing.

Revealing Pitfalls: Investigating AI in Testing

The exploration journey involved delving into various applications of AI within the testing space, revealing a plethora of related pitfalls. These pitfalls ranged from algorithmic biases to challenges in data protection and security, underscoring the complexity of integrating AI into testing practices.

The Need for Careful Integration: Awareness of Issues

Recognizing the need for a careful approach towards integrating AI into testing activities was fundamental. Understanding and addressing the related challenges were vital steps towards successful integration.

Reflective Review

Task Analysis: Reviewing Key Learnings

Engaging in a thorough review of the tasks encountered during the challenge provided valuable insights. This analysis helped in identifying opportunities for improvement, potential roadblocks, and emerging best practices in AI testing.

Opportunities Unveiled

Exploring the tasks revealed various opportunities for advancement and improvement in testing strategies. These opportunities included the potential for automating repetitive tasks, enhancing test coverage, and improving overall efficiency.

Potential Roadblocks Identified

Anticipating potential challenges, such as technical limitations, ethical concerns, and resistance to change, was crucial. Identifying these roadblocks enabled proactive planning and mitigation strategies to effectively overcome them.

Emerging Good Practices Recognized

Recognizing and acknowledging emerging good practices in AI testing provided valuable insights into effective strategies and methods. Learning from successful implementations and case studies facilitated the development of informed approaches towards AI integration.

Mindset Shifts

The Need for Mindset Adjustment: Working with AI

Recognizing the need for mindset adjustment was paramount to navigating the complexities of working with AI. Embracing collaborative approaches and fostering flexibility were essential mindset shifts necessary for successful integration.

Embracing Collaborative Approaches

Transitioning from a siloed mindset to embracing collaborative approaches emphasized the importance of teamwork and cross-functional collaboration. Leveraging diverse perspectives and expertise fostered innovation and problem-solving in AI testing efforts.

Navigating Complexity with Flexibility

Embracing an adaptable mindset enabled flexibility in navigating the evolving landscape of AI technology and testing strategies. Embracing change and fostering a growth mindset facilitated continuous learning and adaptation in AI testing practices.

Cultivating Ethical Stewardship

Ethical considerations, including fairness, transparency, and security, were paramount in the integration of AI into testing activities. Upholding ethical standards and advocating for responsible AI usage were critical aspects of mindset adjustment.

Crafting Personal Principles

Drafting the Manifesto: Guiding Principles for AI in Testing

Creating a set of guiding principles served as a roadmap for navigating the complexities of AI in testing. These principles embodied core beliefs, values, and best practices in AI testing, guiding decision-making and actions.

Principle 1: Collaboration: Enhancing Testing Capability

Recognizing the complementary role of AI in testing activities and leveraging it to enhance testing skill and effectiveness. Emphasizing collaboration between AI systems and human testers to utilize the strengths of both.

Principle 2: Explainability: Understanding AI Outputs

Highlighting the importance of explainability in AI outputs to facilitate understanding and interpretation. Advocating for transparent AI systems that provide clear explanations of their decision-making processes.

Principle 3: Ethics: Addressing Bias, Privacy, and Fairness

Proactively addressing ethical considerations, including bias mitigation, privacy protection, and ensuring fairness in AI testing. Integrating ethical guidelines and frameworks into testing processes to uphold ethical standards.

Principle 4: Continuous Learning: Staying Informed

Committing to continuous learning and staying informed about advancements in AI technology and testing strategies. Fostering a culture of lifelong learning and professional development to adapt to evolving trends and challenges.

Principle 5: Transparency: Essential in AI Testing Tools

Pushing for transparency in AI testing tools and processes to build trust and accountability. Ensuring transparency in AI algorithms, data inputs, and decision-making processes to enable scrutiny and validation.

Principle 6: User-Centricity: Ensuring Quality and Experience

Prioritizing user needs and experiences in AI testing efforts to ensure software quality and deliver positive user experiences. Aligning AI testing efforts with user expectations and preferences to enhance overall product satisfaction.

Sharing and Engagement

Sharing My Manifesto: Inviting Critique and Discussion

Actively sharing my AI in Testing Manifesto with the community to foster dialogue and exchange of ideas. Encouraging feedback and constructive criticism to refine and improve the manifesto further.

Test Exchange Event Participation: AI in Testing Breakout Room

Engaging in the Test Exchange event and participating in the AI in Testing breakout room. Contributing insights, sharing experiences, and learning from peers to broaden perspectives and deepen understanding.

Conclusion

Satisfaction in Self-Discovery: Crafting My Manifesto

Reflecting on the journey of self-discovery and growth in crafting my AI in Testing Manifesto. Finding satisfaction in articulating guiding principles and beliefs in AI testing practices.

Contributing to Collective Discourse: Shaping AI in Testing Perspectives

Contributing to the collective discourse surrounding AI in testing and shaping perspectives within the broader testing community. Motivating others to reflect, act, and improve in their AI testing endeavors.

#Day21 #AIinTestingManifesto @ministryoftesting

5 Likes

Hello, @simon_tomes and fellow learners,

Thanks for this challenge. It allowed me to look into existing popular manifestos and documents.

It also gave me a chance to critically evaluate the pointers in those popular manifestos.

I have documented 6 AI in testing manifesto pointers for today’s task:

Also, I have explained my reasoning behind my testing manifesto pointers in this video blog.

Check it out here:

Do share your thoughts and feedback.

Thanks,
Rahul

11 Likes

Day 21

Create an AI in Testing Manifesto

This is quite the big task, but it’s worthwhile thinking big with big ideas! I will take my inspiration from the Agile Manifesto.

clears throat

To empower testers to provide even greater value to their teams and organisations, we will partner with AI.

  • Assistance from AI instead of replacement
  • Seeing for ourselves rather than dismissing AI
  • Transparency of models through testing instead of opacity
  • Testing for fairness and bias instead of accepting the status quo

Principles

  • Data used to train models should be from sources where the original owner has given permission.
  • Where AI can help us with wasteful or inefficient practices we will approach with an open mind.
  • Utilising AI in testing comes with a responsibility of continuing to hone our own testing skills.
  • AI in testing is best suited to structured, deterministic work; humans are superior explorers.
  • We will be vigilant wherever we see AI being used for nefarious purposes and challenge where safe to do so.
  • If AI is used to threaten the role or skills of a team member, we will show solidarity.
  • We will not accept the first answer given by an AI; the prompt can always be improved.
  • We will endeavour to use the cleanest language possible when interacting with AI, to get the best outcomes.
  • For accessibility, we will go beyond using AI and get real people with differing needs involved.
  • Where AI is used to replace an interaction with a team member, we will challenge this usage of AI.

Phew! That was pretty deep stuff. There have been so many great answers to go through as well.

8 Likes

My contribution.

Validation over Acceptance
The output generated by A.I. must be validated before it can be accepted. It is not (yet) known, because of the lack of transparency, how the A.I. came to the conclusion it generated. As the A.I. can quickly come up with proposals, checks must be done to be sure that these proposals are accurate. Always be aware that the given output is a ‘prediction based on a set of probabilities’.

Ethical Responsibility over Technical Feasibility
We uphold ethical standards in the application of A.I., ensuring that our testing practices respect privacy, security and fairness.

Working Together over Replacing
A.I. is there to assist us and not to replace us. This generation of A.I. does not have the ability to think as we humans can. This generation of A.I. is fast but not smart.

Learning from A.I. over Neglecting A.I.
As A.I. provides answers to the prompts we feed it, we need to check whether they are accurate or not. Perhaps we did not compose the prompt correctly; we have to learn from that. The answers provided by the A.I. can be wrong. We need to provide feedback to the A.I. so that it can improve, and learn from it ourselves.

Note that I ‘stole’ the principle “Ethical Responsibility over Technical Feasibility” from Rahul Parwal as it really resonates with me. I think his contribution for today’s task is really valuable.

7 Likes

Thanks for the feedback, buddy :slight_smile:

2 Likes

This is so cool. I appreciate this could end up being one mammoth task.

To come up with a manifesto, given how much weight we often put behind one, it’s good to see you all just go for it @poojitha-chandra, @adrianjr, @manojk, @parwalrahul, @ash_winter and @p.schrijver.

Well done! :trophy:

I feel inspired that we have started to collectively shape an AI in Testing Manifesto that perhaps we could all get behind. :bulb:

7 Likes

Simon, we don’t ‘steal’, we ‘liberate’ :smiley:

1 Like

So I asked Bing to give me a manifesto out of curiosity, and one word jumped out:

  1. Yield to AI: Embrace AI as a partner, not a threat. Learn its intricacies and leverage its power.

The use of “Yield” would have the tin hat brigade running to their conspiracy blogs :smiley:

In respect of what I have learned over the past 21 days in regards to AI, I would go with the following, although I’m not sure you would call it a manifesto. But I can see AI helping to achieve better quality.

Everchanging world - Standing still gets you nowhere. We work in tech, stay ahead of the curve.

It takes a village - Encourage discussions and not disparage. Everyone has a part to play.

Rome wasn’t built in a day - but stood for a millennium. Build strong structures and do not let ego collapse them.

Safety is Paramount: ensure that your Employers and Clients are not open to compromise.

AI is another Tool. A match cannot ignite itself; it needs us to know which end to strike against which surface.

But these are all things we should be doing anyway.

PS
 I wish I had known about the Test Exchange get-together before committing to taking the in-laws out to dinner :smiley:

2 Likes

@simon_tomes
Hello hello!!

This challenge encourages testers to reflect on their relationship with AI and articulate their values and beliefs in a human-centric way.
By sharing and discussing our personal manifestos, we foster a sense of community, empathy, and mutual learning within the testing profession.

  1. Collaboration: Embrace AI as a collaborative tool, enhancing testing expertise rather than replacing it.
    Foster a culture where testers work alongside AI systems, leveraging each other’s strengths to achieve optimal testing outcomes.

  2. Explainability: Prioritize the ability to understand and explain AI outputs.
    It’s crucial to comprehend the reasoning behind AI-generated results to ensure transparency, identify potential biases, and make informed decisions about testing strategies.

  3. Ethics: Actively address ethical considerations in AI testing.
    This involves addressing bias, ensuring privacy protection, and promoting fairness in testing practices. Strive to mitigate biases in AI algorithms, uphold user privacy rights, and maintain fairness in testing processes.

  4. Continuous Learning: Commit to ongoing learning and staying informed about advancements in AI technology.
    This involves keeping up with the latest research, tools, and best practices in AI testing to adapt and improve testing strategies continually.

  5. Transparency: Advocate for transparency in AI testing tools and processes. Transparency fosters trust and accountability by enabling stakeholders to understand how AI is utilized in testing activities.
    Documenting AI algorithms, methodologies, and results promotes openness and facilitates collaboration.

  6. User-Centricity: Ensure that AI testing efforts prioritize the end-user experience and ultimately enhance software quality.
    Align AI testing strategies with user needs and expectations, leveraging AI to identify and address potential issues early in the development lifecycle.

Rationale:

These principles are crafted to guide a mindful and ethical approach to integrating AI into testing activities.
By prioritizing collaboration, explainability, ethics, continuous learning, transparency, and user-centricity, testers can harness the potential of AI while mitigating risks and ensuring that AI testing efforts contribute to delivering high-quality software that meets user expectations.

5 Likes

Rather than developing a manifesto, I think the key for me comes down to the following:

  1. AI is a useful tool and has some benefits and also constraints. It is not the hype that surrounds it.
  2. AI can provide prompts, tips and assistance that can help save me time.
    I’ve found ways of coding, assessing data and building outline test scenarios that can accelerate the process, but the results do need checking.
  3. AI is not human. We work in organisations that are emotive and hence AI can assist in decisions but cannot make the decisions.
  4. Keep researching. Finding evidence to support both the uses and limitations of AI is important.
3 Likes

My AI in Testing Manifesto

  • Use only AI tools approved by management
  • List the AI tools available in the Test Strategy
  • Review all data, test steps, and results provided by AI
  • Report the AI tools used in the Test Summary Report
  • Continue learning about AI usage in testing
5 Likes

Here is my AI in Testing Manifesto

  1. Use licensed AI tools approved by the company
  2. Provide testers with training material and continuous support by having an AI expert in the organization.
  3. Metrics and Measurement: Define clear metrics to evaluate AI testing effectiveness. Monitor key performance indicators (KPIs) related to test coverage, defect detection, and efficiency.
  4. Risk Assessment: Understand the risks associated with AI testing. Consider false positives, false negatives, and potential blind spots.
5 Likes

Hi, everyone,

please find my AI in Testing Manifesto:

Ethical AI Practices: monitor and assimilate ethical considerations and regulatory changes in order to ensure the responsible use of AI systems; develop and regularly review guidelines for ethical standards.
Continuous Learning: stay familiar with current technologies and progress in AI, collect feedback from team members, analyse performance metrics, and keep an eye on future trends.
Communication and Collaboration: communicate clearly and collaborate efficiently with all parts of the team to ensure that AI solutions are technically correct and meet the company’s goals and needs.
Evaluation of Results: regularly monitor and evaluate the results of AI testing in order to improve the process, protect against potential errors and bias, and ensure fairness.
Data Protection and Security: develop AI use in a responsible and ethical way, and work with the company’s local tools.

4 Likes

1. About Reflect on Key Learnings

Based on the previous 20 days of AI testing challenge tasks, a key learning is that, in addition to starting to accept and continually learn new AI testing tools, it’s also necessary to use AI testing tools with a critical mindset, especially commercial AI testing tools. After all, AI is a current hot topic, and many tools exaggerate their AI capabilities for added hype, which might not be very practical.

However, it’s undeniable that the underlying design principles of most tools’ AI functionalities can be referenced and applied to our daily testing activities.

2. About Consider Your Mindset

  • When using AI testing tools, it’s important to understand their underlying principles and learn better ways to use them.

3. About Craft Your Personal Set of Principles

  • Continuous Learning: There are many aspects in testing activities where efficiency and quality can be improved, and different AI testing tools might intervene at different points. Continuously understanding and learning about new AI testing tools can better adapt to testing activities in the AI era.

  • Learn More: When using AI testing tools, pay more attention to their underlying logic and principles, rather than just relying on the tools’ introductions.

  • Delay Judgement: Do not rush to make final evaluations and judgments on the results provided by AI testing tools. Make judgments after obtaining more information about the results.

  • Positive Attitude: Adopt a positive attitude to accept and adapt to testing activities in the AI era. Keeping up with the times ensures you won’t be replaced, as different eras have different types of testing activities.

  • Collaboration and Cooperation: When using AI testing tools, provide reasonable feedback on the results generated by AI, discuss with peers in online communities, and share experiences.

Blog post link: 30 Days of AI in Testing Challenge: Day 21: Develop your AI in testing manifesto | Nao's Blog

3 Likes

Hi all

Here are my manifesto principles

  • Develop a policy/guideline at the organizational level regarding the utilization of AI tools.
  • Utilize AI as a supportive assistant.
  • Limit the utilization of sensitive data in prompts.
  • Continuously assess the results generated by AI tools.
  • Continuously learn about the changes in the AI world

Thanks
Vishnu

3 Likes

Hello All, I asked for Claude’s help coming up with this, which feels a bit counterproductive for this particular task :face_with_peeking_eye: Anyway, I provided the manifestos as a reference, added my values - which are no different from all your awesome values - and asked the AI to come up with a manifesto. Here is the chat:
You are a software QA tester that is assessing the use of AI tools within the testing process You need to write an AI in testing manifesto you reference is the following links: https://www.ministryoft - Poe

And here is my revised version:

AI in Testing Manifesto

We are uncovering better ways of leveraging artificial intelligence (AI) in software testing by collaborating with these tools and helping others do so. Through this work, we have come to value:

  • Human-AI Collaboration over Isolation
  • Responsible AI Integration over Blind Adoption
  • Continuous Learning over Static Methods
  • Ethical AI Practices over Unchecked Capabilities

That is, while there is value in the practices on the right, we value the principles on the left more.

Principles:

1. Human Oversight is Essential

  • AI tools should be treated as collaborative partners, not standalone decision-makers.
  • Human testers and AI should work together, leveraging their respective strengths.

2. Trustworthiness and Transparency

  • Prior to integration, AI systems must be thoroughly studied for their training data, models, and potential biases.

  • We understand that AI is only as good as the data it learns from, and we actively seek transparency in its training process.
  • The capabilities, limitations, and decision-making processes of AI tools should be openly documented.

3. Continuous Learning and Adaptation

  • AI capabilities evolve rapidly; testing processes must be flexible to adapt.
    Human testers should continuously learn about new AI advancements and their implications.
  • No individual should unilaterally decide on using AI. Decisions should involve cross-functional teams, including testers, developers, and business stakeholders.
  • We foster open discussions, share insights, and collectively evaluate the impact of AI on our testing processes.

4. Ethical and Responsible Practices

  • We uphold the highest standards of integrity when using AI. This includes ensuring data privacy, avoiding conflicts of interest, and maintaining transparency.
  • We actively address ethical concerns related to bias, fairness, and unintended consequences. Our goal is to build trust with stakeholders.

By upholding these principles, we aim to harness the power of AI while preserving human expertise, ethical practices, and a commitment to delivering high-quality software that benefits society.

2 Likes

Namastey Everyone!

The manifestos shared in the task link were informative, so @simon_tomes thank you for always sharing such amazing sample work.
Considering my 20-day journey so far:

  1. Opportunities: To me, AI can bring more growth opportunities for testers. We will not be limited to, or stuck at, where we were in the past 20 years. AI also provides a platform for people who don’t like to code.
    Ex: During my sessions, I came across a tool called ‘Testcraft’ and was thrilled to see how easily test cases were created. I shared it with my office testing team, who had the same reaction, and we created an agenda for everyone to research new AI tools and try to implement them in the organization’s process.
  2. Potential Roadblocks: Everyone is new to AI, so resources for learning in this area are scarce, which means the initial self-directed work is greater.
  3. Good Practices: AI will help us save much of our effort and time and will provide better quality results, but only if we drive the AI how we want it to work and not the other way around.
  4. Transparency: Data security should be considered while using AI tools like these.
  5. The thought that AI might replace Testers is baseless.
  6. Understanding how an AI tool generates its results is as important as using the right tool for the right testing content.
2 Likes