
Autonomous Taxis Mimic Human Behavior on San Francisco Streets, With Unsettling Consequences
Driverless cars are learning human impatience, raising new safety questions
[Image: A Waymo creeping forward at a crosswalk before a pedestrian finishes crossing.]
Caption: Waymo’s autonomous vehicle begins moving before a pedestrian fully crosses, displaying human-like impatience.
Self-driving cars, particularly Waymo’s robotaxis, are increasingly exhibiting human-like behaviors such as impatience, signaling both advancements and potential concerns in artificial intelligence. University of San Francisco Professor William Riggs observed this during a ride-along with a San Francisco Chronicle reporter: a Waymo inched forward at a crosswalk before the pedestrian had fully exited the road. While subtle, this “rolling start” mirrors how human drivers behave, marking a shift from the rigid, rule-following driving these vehicles are known for.
Balancing Safety and Assertiveness
Waymo, which operates in Phoenix, San Francisco, and Los Angeles (with plans to expand to Austin, Atlanta, and Miami), prides itself on safety but has steadily tweaked its algorithms to improve efficiency. David Margines, Waymo’s product management director, explained that human trainers aim to balance strict traffic compliance with timely journeys. Assertiveness, he argues, makes the cars “more predictable” in real-world traffic. For example, a Waymo recently honked at a car that cut it off, a deliberate, human-like response programmed to improve communication with other drivers.
[Image: Waymo vehicle navigating a busy urban intersection.]
Caption: Autonomous vehicles are programmed to blend into traffic while prioritizing safety.
Safety Concerns Persist
Despite these advancements, challenges remain. Waymo has been involved in 696 reported incidents since 2021, not all of which were its fault, including a tragic case in which a robotaxi struck and killed an off-leash dog that its sensors failed to detect. Critics question whether mimicking human impatience could lead to riskier decisions. “The more these cars act like humans, the more they might adopt our flaws,” one expert noted.
Tesla’s Robotaxi Ambitions Face Hurdles
Meanwhile, Tesla’s plans to launch its “Cybercab” robotaxi service in Austin this month hit a snag. The National Highway Traffic Safety Administration (NHTSA) demanded details on how Tesla’s Full Self-Driving (FSD) system—linked to four crashes and a pedestrian incident—will ensure safety. Tesla must respond by June 19 or risk delays.
[Image: Tesla’s proposed Cybercab concept.]
Caption: Tesla’s Cybercab aims to offer autonomous rides, but regulatory scrutiny looms.
Elon Musk, who previously promised “millions of robotaxis” by 2020, remains optimistic, claiming Tesla will produce “fully autonomous” vehicles by 2025. However, the NHTSA’s ongoing probe into FSD’s performance in poor visibility conditions casts doubt on timelines.
The Road Ahead
As self-driving technology evolves, so do ethical and safety debates. Waymo’s adaptive driving shows progress, but blending human intuition with AI decision-making remains complex. For Tesla, regulatory approval is the next hurdle. While autonomous cars promise safer roads, their integration hinges on balancing innovation with accountability.
[Image: NHTSA officials reviewing Tesla’s FSD data.]
Caption: Regulators are scrutinizing Tesla’s autonomous tech before approving its robotaxi rollout.
In the race to dominate the driverless future, striking the right balance between human-like adaptability and machine precision will define success—and public trust.