

Here you are invited to give feedback on how the forum and the ITCB site are run, on working methods, exams, and so on.
We are here at your service, aiming to improve the discussion in the testing world in Israel and to raise its level of professionalism.

Poll: Trial question #2 - so why do we test before we execute?

Trial #2-1    1 vote     100%
Trial #2-2    no votes   0%
Trial #2-3    no votes   0%
Total votes: 1 (halperinko)
Only registered users can vote in the poll.

Topic: Trial question #2 - so why do we test before we execute?

Trial question #2 - so why do we test before we execute?  01 Jul 2014 10:52  #1299

  • halperinko
  • Offline
  • Administrator
  • Posts: 836
  • Thanks received: 35
  • Karma: 3
Trial question #2
Blah blah blah....

Part of the message is hidden from guests. Please log in or register to see it.

Another problem - the order in which the questions are entered is reversed (you need to start from the second one).
I also expect that every reply that gets posted will change the order of the questions.
(Not to mention other posts that may end up in between.)

News from the world of testing

  • Using Retries in tests can hide the bugs

    We are quite familiar with the concept of randomly failing automated tests. Those are tests that, even though there is no change in the feature they are testing, either fail randomly at the same step or fail at random steps. Handling the results of such tests can be tricky, and some teams choose to simply retry a test if it failed. But is that the best option? Here are my thoughts. First of all, we need to ask ourselves why these tests are failing randomly. Here are a few possible reasons:
    - The test environment is unreliable. Too often a test environment does not have enough hardware resources to work properly under the load our automation generates. Or it could be configured incorrectly.
    - We are not using waits (if we are talking about Selenium tests). The test itself is not properly written to account for asynchronous events that take place in the UI we are testing. In some cases the use of Javascript makes it harder for our tests to be reliable.
    In order to have a green test results report after the tests run, a retry mechanism is often put in place. It can re-run the failing tests either only once or a chosen number of times. However, this can hide the fact that the tests really did fail for a reason, and the reason was that there is a bug in the system. Because the test failed at the first run, but could pass at[…] (a sketch of this retry-versus-wait trade-off follows after this news list)

    14.04.2021 | 1:20 Read more...
  • Five Blogs – 14 April 2021

    The (best) five blogs we can read today. Check them out.
    - Catch 22 and The Kobayashi Maru - Written by: Steve Keating
    - Pipelines as code… not text - Written by: Beastmode
    - Achieve More with Less: Pareto's Principle in Software Testing - Written by: Prashant Hedge
    - The thrill of testing - Written by: Paul Seaman
    - Is Critical Thinking Dead? - Written by: Randy Gage
    Quote of the day: "The present changes the past. Looking back you do not find what you left behind." -Kiran Desai
    You can follow this page on Twitter

    13.04.2021 | 11:22 Read more...
  • What I learned from Practitest’s State of Testing Report 2021

    Look at the rules, not the exceptions. Now in its 8th year, the SOT Report provides testers with some valuable trend-based information on all things testing. The full report, in all of its chart-based glory, was delivered today and is well worth bookmarking. You can read the full report here. Sometimes, when attending meetups, conferences or reading online articles, the loudest voices are often those with the most exceptional experiences. But we may discover this fact long after our own Imposter Syndrome has reprimanded us for not living up to their ideals. We may find ourselves wondering "I'll never be a proper tester, I don't even write unit tests" or "I've never done test coaching/worked on IoT technology/done BDD/shifted left/[insert plethora of missing skills]", so is there even a future for me in this industry? Stats tend to be more accurate at revealing general trends. As I did at my Testbash Manchester talk, rather than focusing on the exceptions, I want to pull out some of the rules: how the majority of people who consider themselves to work in Testing define what they do, what they call themselves, and how they work. Nope, it's not sexy. But it is reassuring to learn that out of all the responses:
    - 28% are known as "Test/QA engineers", only 0.89% Test Coach and 2.14% are SDETs.
    - 74% and 60% test Web and Mobile, only 9% IoT and 18% Big Data.
    - 92% work in Agile environments, only 27% use BDD.
    - 75% have tasks that involve Test Automation[…]

    13.04.2021 | 2:14 Read more...
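
The first news item above describes the trade-off between blindly retrying flaky tests and fixing the underlying cause with explicit waits. Below is a minimal sketch of that trade-off, assuming a Python and Selenium stack; the retry decorator, the URL and the element locators are illustrative placeholders, not code from the quoted article.

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC


def retry(times=3):
    """Blind retry: re-run a failing test up to `times` attempts.

    A genuine intermittent bug that fails on attempt 1 and passes on attempt 2
    is silently turned into a green result - the failure reason is hidden.
    """
    def decorator(test_fn):
        def wrapper(*args, **kwargs):
            last_error = None
            for _ in range(times):
                try:
                    return test_fn(*args, **kwargs)
                except AssertionError as err:
                    last_error = err  # real failure swallowed until the last attempt
            raise last_error
        return wrapper
    return decorator


def test_submit_flow(driver: webdriver.Chrome):
    """Alternative to retrying: wait explicitly for the asynchronous UI event."""
    driver.get("https://example.test/form")  # placeholder URL
    # Poll for up to 10 seconds until the button is clickable, so slow
    # JavaScript rendering does not cause a random failure at this step.
    submit = WebDriverWait(driver, 10).until(
        EC.element_to_be_clickable((By.ID, "submit"))  # placeholder locator
    )
    submit.click()
    WebDriverWait(driver, 10).until(
        EC.visibility_of_element_located((By.CSS_SELECTOR, ".confirmation"))
    )

If retries are kept at all, logging every failed attempt rather than only the final outcome preserves the signal that the article warns about losing.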
