
The Community

Videos

News from the Testing World

  • Glad I wasn’t the QA Engineer here!

    According to the NTSB’s analysis of a recent incident where a self-driving Uber car struck and killed a pedestrian: a radar on the modified Volvo XC90 SUV first detected Herzberg roughly six seconds before the impact, followed quickly by the car’s laser-ranging lidar. However, the car’s self-driving system could not classify an object as a pedestrian unless it was near a crosswalk. For the next five seconds, the system alternated between classifying Herzberg as a vehicle, a bike, and an unknown object. Each inaccurate classification had dangerous consequences. When the car thought Herzberg was a vehicle or bicycle, it assumed she would be travelling in the same direction as the Uber vehicle but in the neighboring lane. When it classified her as an unknown object, it assumed she was static.

    Worse still, each time the classification flipped, the car treated her as a brand-new object. That meant it could not track her previous trajectory and calculate that a collision was likely, and thus did not even slow down. Tragically, Volvo’s own City Safety automatic braking system had been disabled because its radars could have interfered with Uber’s self-driving sensors. By the time the XC90 was just a second away from Herzberg, the car finally realized that whatever was in front of it could not be avoided. At this point, it could still have slammed on the brakes to mitigate the impact. Instead, a system called “action suppression” kicked in. This was a feature Uber engineers had implemented to[…] (A minimal sketch of this tracking failure follows the news list.)

    13.11.2019 | 9:07 Read more...
  • Streaming Kafka topic to Delta table (S3) with Spark Structured Streaming

    At Wehkamp we use Apache Kafka in our event-driven service architecture. It handles high loads of messages really well. We use Apache Spark to run analysis and machine learning. When I work with Kafka, the words of Mark van Gool, one of our data architects, always echo in my head: “Kafka should not be used as a data store!” It is really tempting for me to do so, but most of the event topics have a small retention period. Our data strategy specifies that we should store data on S3 for further processing. Raw S3 data is not the best way of dealing with data on Spark, though. In this blog I’ll show how you can use Spark Structured Streaming to write JSON records on a Kafka topic into a Delta table.

    Note: this article assumes that you’re dealing with a JSON topic without a schema. It also assumes that the buckets are mounted to the file system, so we can read and write to them directly (without the need for boto3). Also: I’m using Databricks, so some parts are Databricks-specific.

    Design: to make things easier to understand, I’ve made a diagram of the setup we’re trying to create. Let’s assume we have 2 topics that we need to turn into Delta tables. We have another notebook that consumes those Delta tables. Each topic will get its own Delta table in its own bucket. The topics are read by parametrised jobs that[…] (A minimal PySpark sketch of this pattern follows the news list.)

    13.11.2019 | 8:21 Read more...
  • Getting High Coverage Regression Tests Quickly (Part 2): Improving test coverage using approvals

    This is the second of three blog posts in which I talk about what I learnt while attending Emily Bache’s workshop ‘Getting High Coverage Regression Tests Quickly’, a half-day workshop that took place at Test Bash Manchester on 2nd October 2019. Before reading this blog post, I’d recommend reading part 1 first.

    In my previous blog post, we looked at the use of approvals to validate the results of automated tests. An approval test was used to replace a large number of asserts. The approval test takes a snapshot of the test output and compares it with a previous snapshot, showing the developer whether anything has changed in the application. In this blog post, the approval test will be adapted to increase the test coverage. For this exercise, I used Visual Studio with the ‘dotCover’ and ‘ReSharper’ plugins.

    Getting more coverage: with the test set up the way it was, we were already achieving 70% code coverage. The aim of this exercise was to increase this to 100%. If we look at the code, the lines which are covered by the tests are marked in green. When looking at the shopping cart class, I saw that there were two methods for adding items to the cart: AddItem (which adds only a single named product) and AddItemQuantity (which adds a specified quantity of the named product). So I added an additional line of code to the original test. This added a single[…] (A minimal approval-test sketch follows the news list.)

    13.11.2019 | 2:30 Read more...
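To make the tracking failure in the Uber story concrete, here is a minimal sketch; this is not Uber's code, and all names are hypothetical. It keys object tracks on the classification, so a classification flip discards the accumulated trajectory, which is exactly the failure mode the NTSB describes:

```python
# Minimal illustration of the failure mode described above (not Uber's code):
# keying a track on (id, classification) means a classification flip starts
# a brand-new track with no history, so no trajectory can be extrapolated.
from dataclasses import dataclass, field

@dataclass
class Track:
    positions: list = field(default_factory=list)  # history used to estimate trajectory

tracks: dict = {}  # (object_id, classification) -> Track

def observe(object_id: int, classification: str, position: tuple) -> Track:
    # The bug being illustrated: the key includes the classification, so a
    # flip from "vehicle" to "bicycle" loses the previously observed positions.
    track = tracks.setdefault((object_id, classification), Track())
    track.positions.append(position)
    return track

t = observe(1, "vehicle", (0.0, 0.0))
t = observe(1, "vehicle", (0.0, 1.0))
t = observe(1, "bicycle", (0.0, 2.0))  # classification flips: fresh, empty track
assert len(t.positions) == 1           # the earlier trajectory is gone
```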
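For the Kafka-to-Delta item, here is a minimal PySpark sketch of the pattern the post describes. The broker address, topic name, schema, and mounted bucket paths are hypothetical placeholders; on Databricks the `spark` session already exists:

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StringType, StructField, StructType

# On Databricks `spark` is predefined; elsewhere, build a session with the
# Kafka source and Delta Lake packages on the classpath.
spark = SparkSession.builder.appName("kafka-to-delta").getOrCreate()

# Hypothetical placeholders; the post parametrises these per topic.
BROKERS = "kafka-1:9092"
TOPIC = "events"
TABLE_PATH = "/mnt/events-bucket/delta/events"              # mounted S3 bucket
CHECKPOINT = "/mnt/events-bucket/delta/events/_checkpoint"

# Assumed shape of the schemaless JSON records; define this per topic in practice.
schema = StructType([
    StructField("id", StringType()),
    StructField("payload", StringType()),
])

stream = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", BROKERS)
    .option("subscribe", TOPIC)
    .option("startingOffsets", "earliest")
    .load()
    # Kafka delivers the value as bytes; cast and parse the JSON into columns.
    .select(F.from_json(F.col("value").cast("string"), schema).alias("json"))
    .select("json.*")
)

(stream.writeStream
    .format("delta")
    .option("checkpointLocation", CHECKPOINT)  # enables restart without duplicates
    .outputMode("append")
    .start(TABLE_PATH))
```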
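And for the approval-testing item, a minimal sketch of the idea in Python (the workshop exercise itself used C# in Visual Studio): a single approval replaces many asserts, and one extra call to the second add method is what raises coverage. The `ShoppingCart` class is a hypothetical stand-in, and the `approvaltests` package is one Python implementation of the approach:

```python
# pip install approvaltests pytest
from approvaltests import verify

class ShoppingCart:
    """Hypothetical stand-in for the workshop's shopping cart class."""
    def __init__(self):
        self.items = {}

    def add_item(self, name):                # the exercise's AddItem
        self.add_item_quantity(name, 1)

    def add_item_quantity(self, name, qty):  # the exercise's AddItemQuantity
        self.items[name] = self.items.get(name, 0) + qty

def test_cart_snapshot():
    cart = ShoppingCart()
    cart.add_item("apple")                # covered by the original test
    cart.add_item_quantity("banana", 3)   # the extra line that raises coverage
    # One verify() replaces many asserts: the received output is diffed
    # against the previously approved snapshot file.
    verify("\n".join(f"{name} x {qty}" for name, qty in sorted(cart.items.items())))
```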

Tips

  • A Fresh Look Brings Fresh Bugs
    Naturally, as testers we specialize in working with the product we are testing. In many cases this is an advantage (we can identify problems, work faster, and so on), but at times it is a disadvantage – we develop…
    Read more...
  • Tester – Learn to Question
    Learn to Question – Tony Bruce – A significant part of a tester's work involves gathering information about the system, feature, or subject under test. While gathering that information, the tester encounters a great deal of information from different sources, including assumptions that…
    Read more...
Full list >>