Session Speakers
Name: Sangeetha Jayasankar
Role: Salesforce QA Lead
Session Topic: Gemini – Understanding Test Case Workflows
Session Date: 2025
Name: Janani Thiyagarajan
Role: Salesforce Technical Lead
Session Topic: User Story Preparation & Test Case Review
Session Date: 2025
Name: Jeeva Arumugam
Role: Salesforce Developer
Session Topic: GitHub Copilot – Apex Guru Insights
Session Date: 2025
As part of our ongoing Knowledge Sharing Series, multiple sessions were conducted on the same day to provide a holistic view of modern development practices, AI-assisted tools, and quality-driven delivery. The sessions collectively focused on improving developer productivity, enhancing testing efficiency, and strengthening requirement and test alignment.
The first session, led by Jeeva Arumugam, focused on GitHub Copilot – Apex Guru Insights. The speaker demonstrated how GitHub Copilot can be effectively used in Apex development to accelerate coding, generate logic suggestions, and improve overall development workflows while adhering to best practices.
The second session was delivered by Sangeetha Jayasankar on Gemini – Understanding Test Case Workflows. This session explained how Gemini-generated test cases are structured and how they function end-to-end. The discussion covered how Gemini interprets requirements, generates test steps, and maps them to expected results, helping improve testing efficiency and clarity.
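The requirement-to-test-step mapping described above can be sketched as a simple data structure. This is an illustrative model only; the field names and the sample login requirement below are assumptions, not Gemini's actual output format:

```python
from dataclasses import dataclass, field

@dataclass
class TestStep:
    action: str            # what the tester (or automation) does
    expected_result: str   # what should be observed if the step passes

@dataclass
class TestCase:
    requirement: str                # the requirement the case traces back to
    steps: list[TestStep] = field(default_factory=list)

    def is_complete(self) -> bool:
        # A case is reviewable only if every step maps to an expected result.
        return bool(self.steps) and all(s.expected_result for s in self.steps)

# Hypothetical generated case for a login requirement.
case = TestCase(
    requirement="Registered users can log in with valid credentials",
    steps=[
        TestStep("Navigate to the login page", "Login form is displayed"),
        TestStep("Enter valid username and password and submit",
                 "User is redirected to the home page"),
    ],
)
print(case.is_complete())  # True: every step has an expected result
```

Structuring cases this way keeps the requirement, steps, and expected results traceable to one another, which is what makes the end-to-end workflow reviewable.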
The final session was conducted by Janani Thiyagarajan on User Story Preparation & Test Case Review. The session emphasized the importance of well-defined user stories, clear acceptance criteria, and thorough test case reviews to ensure complete requirement coverage and reduce rework.
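The alignment between acceptance criteria and test cases that this session stressed can be illustrated with a small coverage check during review. The user story, criterion IDs, and test case names here are invented for the sketch:

```python
# A minimal sketch of a requirement-coverage check during test case review.
# The story and criterion IDs below are illustrative, not from a real backlog.
user_story = {
    "title": "As a user, I can reset my password via email",
    "acceptance_criteria": {
        "AC1": "A reset link is emailed to a registered address",
        "AC2": "The reset link expires after 24 hours",
        "AC3": "An unregistered address shows a generic message",
    },
}

# Each reviewed test case records which criteria it covers.
test_cases = [
    {"name": "TC-01 reset link delivery", "covers": {"AC1"}},
    {"name": "TC-02 expired link rejected", "covers": {"AC2"}},
]

covered = set().union(*(tc["covers"] for tc in test_cases))
missing = set(user_story["acceptance_criteria"]) - covered
print(sorted(missing))  # ['AC3'] -> AC3 still needs a test case before sign-off
```

A check like this makes gaps in requirement coverage visible during review, reducing the rework the session warned about.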
Key Highlights of the Sessions:
- AI-assisted development using GitHub Copilot for Apex
- Understanding and validating Gemini-generated test cases
- Best practices for user story preparation
- Effective test case review techniques
- Aligning requirements, development, and testing
Participants actively engaged across all sessions through questions and discussions, making the knowledge-sharing experience interactive and insightful. The combined sessions provided valuable perspectives on leveraging AI tools responsibly while maintaining strong fundamentals in requirement analysis and quality assurance.
Outcome of the Sessions:
- Improved awareness of AI tools in development and testing
- Better understanding of automated test case workflows
- Stronger alignment between user stories and test cases
- Enhanced collaboration across development and QA teams
Overall, the sessions were well-received and added significant value to the Knowledge Sharing Series, reinforcing our commitment to continuous learning and adoption of best practices across the delivery lifecycle.
