
Scientists at MIT's LabSix, an artificial intelligence research group, tricked Google's image-recognition AI, InceptionV3, into thinking that a baseball was an espresso, a 3D-printed turtle was a rifle, and a cat was guacamole.

The experiment might seem outlandish at first, but the results demonstrate why relying on machines to identify objects in the real world could be problematic. For example, the cameras on self-driving cars use similar technology to identify pedestrians while in motion and in all sorts of weather conditions. If an image of a stop sign were blurred or otherwise altered, an AI program controlling a vehicle could theoretically misidentify it, with terrible consequences.

The results of the study, published online today, show that AI programs are susceptible to misidentifying slightly distorted real-world objects, whether the distortion is intentional or not.


AI scientists call these manipulated objects or images, such as a turtle with a textured surface that mimics the surface of a rifle, "adversarial examples."

"Our work demonstrates that adversarial examples are a significantly larger problem in real world systems than previously thought," the scientists wrote in the published research.
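Adversarial examples of this kind are typically built by nudging every pixel a tiny amount in the direction that most increases the classifier's error, an idea known as the fast gradient sign method. The sketch below shows the principle on a hypothetical toy linear classifier; it is an illustration of the general technique, not LabSix's model or code:

```python
import numpy as np

d = 10_000                        # pretend this is a 100x100 grayscale image
w = np.tile([1.0, -1.0], d // 2)  # toy linear classifier weights (sum to zero)

def classify(img):
    return "turtle" if w @ img > 0 else "rifle"

x = 0.5 + w / d                   # an "image" the model scores as w @ x = 1.0
assert classify(x) == "turtle"

# FGSM-style step: move every pixel a tiny amount (0.05% of the pixel range)
# against the gradient of the score; for a linear model that gradient is w.
eps = 5e-4
x_adv = x - eps * np.sign(w)      # score drops by eps * d = 5.0, flipping it

print(classify(x_adv))  # → rifle
```

Because the attack sums a tiny change over thousands of pixels, the label flips even though no single pixel moved perceptibly.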


The 3D-printed turtle illustrates their point. First, the team presents an ordinary turtle to Google's AI program, which correctly classifies it as a turtle. Then the researchers modify the texture on the shell in minute ways, almost imperceptible to the human eye, which makes the machine identify the turtle as a rifle.

The striking observation in LabSix's study is that the manipulated or "perturbed" turtle was misclassified at most angles, even when they flipped the turtle over.

To create this nuanced trickery, the MIT researchers used a program of their own design that generates "adversarial" images. It simulated real-world situations an AI program is likely to encounter, such as blurred or rotating objects, much like the input an AI might get from the cameras on a fast-moving self-driving car.
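The key trick in simulating those situations is to optimize the perturbation against the average of many transformed views rather than a single fixed image, so the attack survives rotation and blur. A minimal sketch of that averaging idea, using a hypothetical linear model and circular pixel shifts as a stand-in for camera motion (not the researchers' actual code):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 256

w = rng.normal(size=d)            # toy linear classifier weights

def score(img):
    return w @ img                # score > 0 would mean "turtle"

def transform(img, t):
    # Simulated viewing transformation: a circular pixel shift, standing
    # in for the rotations and blur a moving camera would introduce.
    return np.roll(img, t)

x = rng.normal(size=d) * 0.1      # the clean "image"

# Average the attack gradient over many sampled transformations, so the
# perturbation lowers the score across viewing angles, not just at one.
# The gradient of score(transform(x, t)) w.r.t. x is w rolled back by t.
shifts = rng.integers(0, d, size=200)
avg_grad = np.mean([np.roll(w, -t) for t in shifts], axis=0)

eps = 0.5
x_adv = x - eps * np.sign(avg_grad)

clean = np.mean([score(transform(x, t)) for t in shifts])
adv = np.mean([score(transform(x_adv, t)) for t in shifts])
print(adv < clean)  # True: the expected score drops over the whole
                    # distribution of simulated views
```

Stepping against the averaged gradient is guaranteed to lower the mean score over the sampled transformations, which is why the perturbed object keeps fooling the model from many angles.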

With the seemingly incessant progression of AI technologies and their application in our lives (cars, image generation, self-taught programs), it's important that some researchers are attempting to fool our advanced AI programs; doing so exposes their weaknesses.

After all, you wouldn't want a camera on your autonomous vehicle to mistake a stop sign for a person — or a cat for guacamole.

