Big Yellow Robotaxi


For a number of years now, public discourse has been haunted by a specter called Techlash. Capitalized so you know it’s a real thing, yet too vague and shifting to actually be defined, it’s one of those modern coinages that attempts to forge a sense of unity across our atomized society by keeping things at a strictly vibes level. Indeed, the term “Techlash” is only specific in the “lash” part, which betrays the fact that our efforts to come to terms with the outsized role that technology now plays in our lives largely consist of frustrated flailing.

Nowhere is this more clear than in the realm of autonomous vehicles, where debates over the technology and its role in society have been profoundly dysfunctional since the hype cycle first kicked off nearly a decade ago. Though mostly a sideshow for much of that time, due to the simple fact that the technology wasn’t even real, the confused debates about self-driving are becoming even more absurd now that robotic vehicles actually are a real thing. And thanks to the fact that three robotaxi companies chose San Francisco as a major testing or deployment area, the entire technology is being sucked into a context-shredding black hole of “Techlash” and culture war.

It makes perfect sense: autonomous vehicles are highly visible symbols of the high tech sector's power, wielding the power of life and death (and traffic delays) in the public thoroughfare, with little to no oversight or regulation. Nor are the grievances against the robotaxis operating in San Francisco completely made up; there have been weird behaviors, crashes, delays, communication challenges with first responders and construction crews, and much more. Having personally helped AV companies, while working for the industry's educational nonprofit, push the message that they proactively work with local government and first responders to avoid precisely these kinds of issues, I am as embarrassed and angry at these failures as anyone else.

That said, context is never more important than when one is angry and embarrassed, and this is a case where context really matters. After all, just because something seems like the perfect symbol of a massive and nebulous problem doesn't mean it's the most deserving target of one's righteous indignation. Indeed, the revelation that San Francisco city officials overstated the safety problems that AVs present in their city illustrates the kind of overreach that takes hold when ill-defined rage finds its perfect symbolic target.

I saw a preview of this movie back in 2019, when the Washington Post’s Faiz Siddiqui reported on a growing anti-AV NIMBYism among Silicon Valley tech workers who feared that the fast-and-loose tactics they saw in their workplaces were coming for their public roads. As I noted in a column at the time, that fear was entirely appropriate… it was just aimed at the wrong target. I wrote:

“Misperceptions of risk are often rooted in aesthetic novelty more than anything else, drawing our attention to things that look startlingly unfamiliar while allowing more immediate but somehow familiar risks to fade into the background. Unsurprisingly, these concerned residents of Silicon Valley seem to have latched onto the Alphabet company Waymo’s unusual-looking autonomous test vehicles, which bulge with a variety of sensors and immediately stand out as members of an experimental test fleet. Meanwhile, the “sheer volume of Teslas on the streets” that Siddiqui only mentions in passing as evidence of The Valley’s willingness to adopt new technologies, pass by with the quiet anonymity of any other consumer vehicle.

 This dichotomy shows how badly risk can be misperceived, given the profound differences between how Waymo and Tesla approach risk and safety. This contrast extends from their overall approaches to autonomy and internal safety cultures to the designs of their technology stacks and on-road testing protocols. When armed with the full facts, the ubiquitous and anonymous Teslas turn out to be embodiments of the toxic culture that fuels “techlash” anxieties while the eye-catchingly unfamiliar Waymos reflect a reassuring culture of cautious safety.”

Think about this for a moment: when an autonomous vehicle drives by you in San Francisco, the very features that allow you to identify it as such are literally there for safety reasons. The radar bulges and lidar domes, in particular, provide levels of sensor redundancy and diversity that are necessary to even approach robustly safe performance. Similarly, the cautious and methodical driving style of an AV in a city like San Francisco (which, it must be said, is light years more assertive and naturalistic than it was just a few years ago) both assures safety and draws attention to these rolling symbols of high tech hegemony. It is these very marks of good citizenship that make AVs such a perfect scapegoat.

Meanwhile, the one company that has simply chosen not to use these safety-assuring technologies or engage with regulators of any kind, and which has pushed the most cynical and dangerous driving automation systems in the market, continues to fly completely under the radar. Essentially nothing has changed since 2019, when I wrote:

“Musk has repeatedly expressed the desire to push automated driving software onto public roads as fast as possible in order to reduce the number of road fatalities caused by human error and argued that anyone critical of resulting crashes is “killing people” by dissuading them from adopting the technology. This argument for putting as many vehicles on public roads as possible to speed up neural net learning was condensed into a breathtakingly crude utilitarian argument by his acolyte and defender Lex Fridman, who argued that “we’re going to have to be more forgiving of a car causing a fatality” and said that if an AV accelerated into a crowd of people it would be justified if it led to a decrease in human-caused fatalities.

This is precisely the kind of toxic tech culture that the “techlash” is rightly focused on: pushing immature technology onto public roads as fast as possible, and cavalierly endangering lives based on the hope that it might someday save more lives than were lost during development. It’s precisely the attitude that led to the fatal crash of an Uber autonomous test vehicle in Tempe, AZ last year, prompting the entire sector to rethink the whole notion of a “race to autonomy” and beef up their safety cultures, particularly in the context of public road testing. In the “trough of disillusionment” that has followed this tragedy, many AV developers have pushed back their timelines and doubled down on safety with the understanding that another fatality could prompt precisely the kind of backlash we see in Siddiqui’s story.”

I want to be clear here: Tesla’s totally unique and indefensibly unsafe approach to driving automation does not absolve the Waymos and Cruises of the world of their failures, or their need to become better citizens of the cities where they operate. That said, getting outraged about a few traffic jams and making the highly visible AVs that contribute to them symbols of Big Tech’s lack of accountability is a profound misassessment of the risks in the AV sector. These traffic jams are the kinds of things that, if handled responsibly, should be held up as a reasonable cost of developing this technology on public roads. The three deaths that thorough NTSB investigations have definitively tied to Tesla’s Autopilot design, not to mention the countless other injuries and nonfatal crashes that Tesla’s reckless approach to this technology has contributed to, are not.

Don’t cry for the responsible AV developers though, because this is a situation of their own making. Every serious leader in this sector is well aware that Tesla is a dramatic outlier, whose entire approach to the tech is fundamentally dangerous and fraudulent, but has chosen to stay silent as Tesla’s lies and bluster dominate public perceptions of their work. Making the right choices and hoping the public eventually stops punishing them for it, while Tesla makes all the wrong choices and evades any form of consequences, hasn’t worked for a very long time now. I understand why they don’t want to tangle with Musk and Tesla and their horde of mindless, abusive touts, probably better than anyone, but that is precisely the cost of leadership in this sector and there is no other route to it.

For the rest of us, the lesson here is that we need to understand the “tech” better in order to do more than simply “lash” out at it. Lashing means striking out, and right now that’s what we’re doing: overwhelmed by the power and trajectory of technology, we are flailing at the most visible and symbolically resonant avatars of what we imagine “tech” to be. Though momentarily satisfying, lashing out rarely solves problems because it is fundamentally an imprecise and emotional outburst. At best, the “Techlash” is a foundation-level vibe that can unite a lot of people for a lot of reasons, but it needs well-informed and well-targeted applications to bring us closer to a better world.

2 responses to “Big Yellow Robotaxi”

  1. When, on my next travels, a “robotaxi” can drive me from Kokand to Bukhara, or on a Friday at aperitivo hour in Milan, then wake me up.


  2. The divide between visible and invisible experimentation with AVs evokes biology and the parasite/host mechanism. It is almost as if Tesla is playing the role of a cuckoo, using the premise of electric luxury autonomy to implant its experimental technology on our roads – their evolutionary success defined by the sale of ever more predatory devices.

    But, of course, one doesn’t need to buy a Tesla to infect their car’s piloting system with a pixel matrix multiplier indifferent to the actual presence of pedestrians or stop signs. Comma’s offerings can transmute any warbler into a Cuculus while looking no odder than a dashcam.

    Like the host species who learned a fear of smashing obviously parasitic eggs to discourage cuckoo parents returning to destroy the host clutch, are regulators still too afraid of economic retaliation to reject the FSD experiment? If they’re already this complacent, what mistakes can we expect them to ignore in 20, 40 years?

