As if we didn’t have enough to make us feel utterly anxious, computer scientists from Israel’s Bar-Ilan University recently discovered a way to remotely fool self-driving cars, turning them from a marvel of effortless convenience into a darting, unruly weapon.

While most machine-learning researchers are busy devising ways to reduce potential misunderstandings and mistakes, Yossi Keshet and his students do the opposite: their niche discipline, known as “adversarial examples,” focuses on discovering new and intricate ways to fool artificial intelligence.

It’s a far more deliberate process than it sounds. Starting with a specific end goal in mind (making a self-driving car plow into pedestrians, say), Keshet and his doctoral student, Yossi Adi, worked out precise mathematical calculations to map how the machine had learned. Then they introduced “digital noise”: tiny perturbations that reconfigure the system’s calculations. Suppose a self-driving car’s camera clearly identifies pedestrians crossing the street right in front of it. Keshet’s team has proven that hackers can digitally add just a small smudge to the image, and because the machine, unlike a human, analyzes the image as a barcode, breaking it down into fixed elements in any given space, that smudge makes the car “unsee” the people right in front of it. Keshet and Adi will present their findings later this year at the prestigious NIPS artificial intelligence conference in California.
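For readers who want a feel for how such “digital noise” works, here is a minimal sketch in Python. It uses a toy linear “pedestrian detector” and a fast-gradient-sign perturbation; the detector, the image, and all the numbers are invented for illustration and are not Keshet and Adi’s actual model or attack.

```python
import numpy as np

# Toy "pedestrian detector": score > 0 means "pedestrian detected".
# This is a hypothetical stand-in, not the system studied at Bar-Ilan.
rng = np.random.default_rng(0)
d = 100                              # number of pixels in our toy image
w = rng.normal(size=d)               # the detector's learned weights
x = rng.uniform(size=d)              # an image the detector gets right
b = 2.0 - w @ x                      # bias chosen so detection is confident

def score(img):
    return float(w @ img + b)

# The "digital noise": nudge every pixel by at most epsilon in the
# direction that most decreases the detection score (the gradient of a
# linear model with respect to its input is simply w).
epsilon = 0.05
x_adv = x - epsilon * np.sign(w)     # a real attack would also clip to [0, 1]

print(score(x))      # positive: pedestrian detected
print(score(x_adv))  # negative: the detector "unsees" the pedestrian
```

The point of the sketch is the size of the change: no single pixel moves by more than `epsilon`, a smudge far too faint for a human to notice, yet the detector’s answer flips because the noise is aligned, pixel by pixel, against the model’s learned weights.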

“In our reality today, we make increasing use of machine learning systems, which occupy a larger and larger part of our lives,” Keshet told Haaretz, adding that the implications of his new discoveries “could be disastrous, especially for safety and security systems, but also in matters of privacy, which is why it’s a problem demanding the immediate attention of all involved.”