
The Community

עידן

About me

The user has not shared any personal information
Member since
Thursday, 29 March 2018, 20:34
Last online
Never logged in

Friends

No friends at the moment

Groups

You have not joined any groups yet.

News from the world of testing

  • ATA Meetup #22 - Bangalore - Amazing experience

    Reached super early: The session was supposed to start at 9 AM and I reached by 7:45 AM; I did not want to be late. Thanks to the light weekend traffic and a fast driver, I surprised myself, and I thought I could just walk in and wait in the hall. Security asked me for the contact person's name, and telling them there was a meetup by the Agile Testing Alliance did not help. I called up Aditya Garg, and somehow security was convinced that I could at least pass the main barricade and sit on the makeshift park seats. It was nice to enjoy the fresh air, have some fruit, and dive into an interesting book called "The Practicing Mind" by Thomas M. Sterner. As I read the book, I remembered the discussions with Shrini Kulkarni about consciousness, mind, and awareness. Around 8:40 AM, Thrivikram and Venkata P from HCL welcomed me and escorted me to the induction hall where we had the meetup. The conversation between them and the security folks was an interesting one, making me think of process adherence vs. value addition. Learning for me: know the contact person in advance and keep them informed about surprises in the plan. HCL Services: The first session was by HCL management, represented by Prashantha M, who highlighted the various services offered by HCL, the case studies, and the lessons learned. There were a few really good questions from the audience, who wanted to know more details about the insights shared with them. My tip: Knowing[…]

    25.05.2019 | 11:55 Read more...
  • Performance testing (benchmarking) Java code with JMH

    Contents: 1) Introduction; 2) Is it easy?; 3) Common pitfalls; 4) Setup; 5) How to configure JMH?; 6) Configuration options; 7) Configuration - predefining state; 8) Demo; 9) Results; 10) Further reading. 1) Introduction: As test engineers, when we approach performance testing we usually think only of final, end-to-end application verification with tools such as JMeter, Locust, or Gatling. We know that such tests should run on a separate environment whose conditions resemble production as closely as possible. Unfortunately, in some cases (especially with a monolithic architecture) a dedicated performance-testing environment is hard to get. What to do in such cases? Should we test on the shared test environment? Should we test on production? Or maybe we should change our approach to performance testing? Each option has advantages and disadvantages. Today I'd like to describe low-level performance testing (often called benchmarking) of Java code. It does not require a separate environment and can be executed directly from your IDE (although that's not recommended) or from the command line. Measuring the performance of critical pieces of code is essential for everyone who creates applications, frameworks, and tools. Testers are co-creators, so it's also our responsibility. 2) Is it easy? Benchmarking correctly is hard. There are multiple optimizations implemented on the JVM/OS/hardware side which make it challenging. In order to measure correctly, you need to understand how to avoid those optimizations, because they may not happen in the real production system. Thankfully, there is a tool that helps you mitigate those issues, called JMH (Java Microbenchmark Harness). It was created for building, running, and analyzing nano/micro/milli/macro benchmarks written in Java[…] (a minimal benchmark sketch follows this news list)

    25.05.2019 | 8:10 Read more...
  • Looking at Observability

    The Test team book club is reading Guide: Achieving Observability from Honeycomb, a high-level white paper outlining what observability of a system means, why you might want it, and factors relevant to achieving and getting value from it. It's not a particularly technical piece, but it's sketched out to sufficient depth that our conversations have compared the content of the guide to the approaches taken in some of our internal projects, the problems they present, and our current solutions to them. While I enjoy that practical stuff a great deal, I also enjoy chewing over the semantics of the terminology and making connections between domains. Here are a couple of first thoughts in that area. The guide distinguishes between monitoring and observability. Monitoring: "Monitoring ... will tell you when something you know about but haven't fixed yet happens again" and "... you are unable to answer any questions you didn't predict in advance. Monitoring ... discard[s] all the context of your events". Observability: "Observability is all about answering questions about your system using data", "rapidly iterate through hypothesis after hypothesis" and "strengthen the human element, the curiosity element, the ability to make connections." I don't find that there's a particularly bright line between monitoring and observability: both record data for subsequent analysis, and whether the system is (appropriately) observable depends on whether the data recorded is sufficient to answer the questions that need to be asked of it. I think this maps interestingly to conversations around checking and testing and the intent of the data[…]

    25.05.2019 | 4:48 Read more...
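
To make the JMH item above concrete, here is a minimal benchmark sketch. It is illustrative only, not code from the article: it assumes the jmh-core and jmh-generator-annprocess dependencies are on the classpath, and the string-concatenation comparison is a made-up example.

import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Measurement;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.annotations.Warmup;
import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;

@BenchmarkMode(Mode.AverageTime)        // report mean time per operation
@OutputTimeUnit(TimeUnit.NANOSECONDS)
@Warmup(iterations = 3, time = 1)       // let the JIT stabilize before measuring
@Measurement(iterations = 5, time = 1)
@Fork(1)                                // measure in a fresh forked JVM
@State(Scope.Benchmark)                 // predefined state shared by the benchmark methods
public class ConcatBenchmark {

    private String[] words;

    @Setup
    public void prepare() {
        // State is initialized outside the measured code.
        words = new String[] { "performance", "testing", "with", "JMH" };
    }

    @Benchmark
    public String stringBuilder() {
        StringBuilder sb = new StringBuilder();
        for (String w : words) {
            sb.append(w);
        }
        return sb.toString();           // returning the result keeps it from being optimized away
    }

    @Benchmark
    public String plusConcatenation() {
        String result = "";
        for (String w : words) {
            result += w;
        }
        return result;
    }

    // Command-line entry point, since the article recommends not running benchmarks from the IDE.
    public static void main(String[] args) throws Exception {
        Options opts = new OptionsBuilder()
                .include(ConcatBenchmark.class.getSimpleName())
                .build();
        new Runner(opts).run();
    }
}

Returning a value from each @Benchmark method (or sinking it into a Blackhole) prevents the JIT from eliminating the measured code as dead code, one of the common pitfalls the article's table of contents alludes to.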

Tips

For the full list >>