Here you are invited to give feedback on the conduct of the forum and the ITCB site, working methods, exams, and so on.
We are here at your service, aiming to improve the discourse in the testing world in Israel and to raise the level of professionalism.
Below is a share that Avi performed (I don't know in which manner: from within our site, or by placing a link in his LinkedIn updates).
As you can see, the header is taken from a forum post
(at least it links to www.itcb.org.il/index.php?option=com_kun...=6&id=507&Itemid=632),
yet when I open the blog, there is no such item that includes both the name and the description.
A problematic issue: it seems that some of the applications that take a Share grab mixed-up information from different areas of the page, including from the blocks on the left.
This really should be improved, since we cannot even write a different description in place of the automatic one.
Mixed share from forum post
17 Oct 2013 20:05 #957
We are quite familiar with the concept of randomly failing automated tests. These are tests that, even though there is no change in the feature they are testing, either fail randomly at the same step or fail at random steps. Handling the results of such tests can be tricky, and some teams choose to simply retry a test if it failed. But is that the best option? Here are my thoughts.
First of all, we need to ask ourselves why these tests are failing randomly. Here are a few possible reasons:
- The test environment is unreliable. Too often a test environment does not have enough hardware resources to work properly under the load our automation generates, or it is configured incorrectly.
- We are not using waits (in the case of Selenium tests). The test itself is not properly written to account for asynchronous events that take place in the UI we are testing. In some cases, the use of JavaScript makes it even harder for our tests to be reliable.
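The explicit-wait idea can be illustrated without a browser: instead of sleeping a fixed amount of time, the test polls a condition until it holds or a timeout expires. A minimal sketch of that pattern, assuming nothing beyond the standard library (the function and variable names here are illustrative, not from any post quoted above):

```python
import time

def wait_until(condition, timeout=5.0, poll_interval=0.1):
    """Poll `condition` until it returns a truthy value or `timeout` expires.

    This mirrors the idea behind Selenium's explicit waits: the test
    proceeds as soon as the asynchronous UI event has happened, instead
    of sleeping for a fixed (and usually wrong) amount of time.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(poll_interval)
    raise TimeoutError("condition not met within %.1f seconds" % timeout)

# Example: an "element" that only becomes available after a short delay.
ready_at = time.monotonic() + 0.3
value = wait_until(lambda: "loaded" if time.monotonic() >= ready_at else None)
print(value)  # → loaded
```

The same shape underlies Selenium's own `WebDriverWait`; the point is that the timeout bounds the wait without forcing every run to pay it in full.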
In order to get a green test results report after the tests have run, a retry mechanism is often put in place. It can re-run the failing tests either only once, or a chosen number of times. However, this can hide the fact that the tests really did fail for a reason, and that the reason is a bug in the system. Because the test failed at the first run, but could pass at[…]
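A retry mechanism of the kind described above can be sketched as a small wrapper. This is a simplified illustration, not any team's actual harness: note how a test that passes on a later attempt is reported green, which is exactly the masking risk the paragraph warns about.

```python
def run_with_retries(test_fn, max_attempts=3):
    """Run `test_fn` up to `max_attempts` times; report pass on first success.

    Returns (passed, attempts_used). A flaky test may pass on a later
    attempt, hiding the underlying problem from the results report.
    Only AssertionError counts as a test failure here; other exceptions
    propagate.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            test_fn()
            return True, attempt
        except AssertionError:
            pass
    return False, max_attempts

# A "flaky" test that fails on its first run, then passes.
calls = {"n": 0}
def flaky_test():
    calls["n"] += 1
    assert calls["n"] > 1, "random environment hiccup"

print(run_with_retries(flaky_test))  # → (True, 2)
```

A genuinely broken test still fails after all attempts, so retries do not hide hard failures, but they do hide intermittent ones, which is why the retry count and the first-attempt results are worth reporting too.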
The five (best) blogs we can read today. Check them out.
Catch 22 and The Kobayashi Maru
Written by: Steve Keating
Pipelines as code… not text
Written by: Beastmode
Achieve More with Less: Pareto’s Principle in Software Testing
Written by: Prashant Hedge
The thrill of testing
Written by: Paul Seaman
Is Critical Thinking Dead?
Written by: Randy Gage
Quote of the day:
“The present changes the past. Looking back you do not find what you left behind.” -Kiran Desai
Look at the rules, not the exceptions
Now in its 8th year, the SOT Report provides testers with some valuable trend-based information on all things testing. The full report, in all of its chart-based glory, was delivered today and is well worth bookmarking. You can read the full report here.
Sometimes, when attending meetups or conferences, or reading online articles, the loudest voices are those with the most exceptional experiences. But we may discover this fact long after our own Imposter Syndrome has reprimanded us for not living up to their ideals. We may find ourselves wondering "I'll never be a proper tester, I don't even write unit tests" or "I've never done test coaching/worked on IoT technology/done BDD/shifted left/[insert plethora of missing skills], so is there even a future for me in this industry?"
Stats tend to be more accurate at revealing general trends. As I did at my Testbash Manchester talk, rather than focusing on the exceptions, I want to pull out some of the rules. How the majority of people who consider themselves to work in Testing define what they do, what they call themselves, and how they work.
Nope, it's not sexy. But it is reassuring to learn that, out of all the responses:
- 28% are known as "Test/QA engineers"; only 0.89% are Test Coaches and 2.14% are SDETs.
- 74% test Web and 60% test Mobile; only 9% IoT and 18% Big Data.
- 92% work in Agile environments; only 27% use BDD.
- 75% have tasks that involve Test Automation[…]
If you stumbled upon a bug by chance... or a failure came back from a customer: search for similar bugs, since you most likely missed a whole series of bugs of the same kind. Freely translated from the following document by Cem Kaner. See the document at the link below, along with a list of additional checklists…
Don't waste hours and days preparing a weekly report. Use a dashboard instead: leverage the information that already exists in the bug-tracking and test-management systems, and make sure the dashboard is easily accessible to the relevant people and shows up-to-date information.