Recently, California legislators introduced the B.O.T. Act of 2018, which, as the Electronic Frontier Foundation summarizes, “would make it unlawful for any person to use a social bot to communicate or interact with natural persons online without disclosing that the bot is not a natural person.” The EFF describes the proposed bill as an understandable but over-broad response to Russia’s use of bots to influence the 2016 election as well as the use of spambots to commit online fraud. While it is tempting to accuse the EFF of defending bot rights, they haven’t quite taken that position, though they do come close.
As would be expected, the EFF alleges that the California bill can “chill the use of bots for protected speech activities,” and although the post written by Jamie Williams alludes to some interesting areas to explore vis-à-vis bots and speech, one significant flaw in her summary of the bill is that it omits a key condition: “with the intention of misleading.” There may be circumstances in which intentionally misleading consumers, constituents, fans, voters, etc. can be considered protected speech, but it seems reasonable to assume that most actors who intentionally mislead are doing something harmful, and probably illegal.
It is characteristic of the EFF to trivialize a legitimate problem by imagining hypothetical negative consequences of the legislation proposed to address that problem. Even the couple of Twitter-feed examples Williams cites* as speech that may be chilled do not appear to be bot uses that would necessarily run afoul of the California law.
The first of these is @soft_focuses, which is essentially a bot-generated version of fridge-magnet poetry. The second, rather interesting, example is @censusAmericans, which interprets anodyne census data and turns lines of information into “real” people. So, a typical tweet says, “I live with my father. He works. I speak German at home. I have never been married.”
If indeed both of these examples are protected speech, neither appears to “intentionally mislead” anyone. To the contrary, both the bot-generated poetry and the bot-generated census characters seem to be a) obviously the “speech” of bots; and/or b) harmless to the rare viewer who might somehow mistake either as the expression of a natural person. Accordingly, even if either feed were required to more explicitly “label” its use of bots, the speech in question would not be in any way diminished.
Do Bots Have Free Speech Rights?
I certainly hope we never come to the conclusion that they do. But a distinction I would make between the two examples presented by EFF is that the poetry Twitter account, although owned by a human (or humans), does not appear to communicate much protectable speech at the direction of that human. It simply produces random combinations of words that sound kinda like poetry.
Assuming that is correct, each tweet is an example of purely bot-generated content, which should not be protected because machines do not have natural rights. One could argue that the human’s decision to present the whole Twitter feed constitutes an artistic statement in itself, though not a very original one and not one that would likely differ in character from another feed doing exactly the same thing. Hence, the amount of protected speech would seem to be very thin and, therefore, not likely to be infringed by the California proposal. (This dovetails with the discussion of AIs owning copyrights.)
By contrast, although each tweet in the census example may be partly the result of data interpretation by a bot, the output is not random words. In fact, human authors have clearly set certain rules, like the imposition of the pronoun “I” to generate first-person statements as well as the basic subject-verb-object structure of English sentences. The cumulative result is a mosaic of fictional characters that represents real Americans in a Twitter-only narrative, not unlike the way in which characters in a movie or play represent real people. Thus, the owner of @censusAmericans is the natural person exercising a free speech right by presenting this collage to the public, which constitutes creative and politically substantive speech.
Bot Speech is the Least of Our Worries
Beyond sci-fi wish-fulfillment, I’m not sure why it is necessary or beneficial, in many cases, to want bots to behave more like humans in the first place. Granted, I’m not bringing a smart device into my home like an Alexa or a Duplex because I’ve read my Huxley, Orwell, and Bradbury; but if I did own such a device, I’d want a giant wall of separation between me and the machine, lest I find myself locked out of the house one day and the thing telling me why it can’t “afford to jeopardize the mission.” (It’s bad enough when the toast pops up too late.)
Meanwhile, as the EFF opposes what amounts to a consumer-protection bill on highly speculative free speech grounds, I have to say that, at present, I’m more concerned with humans behaving like bots than the other way around. Let’s face it, every time one of us clicks “Like” or shares a post or article based solely on the headline, we’re pretty much doing bot-work. The right keywords appear in front of our little sensors, and CLICK!—we pass it on to our circles of bots, who pass it on to their circles of bots.
And that doesn’t even account for the volume of ingrained misconception across the political spectrum on a wide range of issues boiled down to a few buzzwords. The folks at EFF are, in fact, expert at exploiting this phenomenon, at triggering Pavlovian responses to keyword conclusions on otherwise complex topics. Remember how the IP provisions in the TPP were going to chill speech on the internet? Is that claim any less absurd than the current administration’s rationale for pulling out of the most important trade deal in recent history? Stare at that Venn diagram for a while and try not to lose your mind.
My point is that we are already treading water in a sea of externally and internally inflicted deceptions and obfuscations written by human beings. So, to the extent California’s bot “warning label” might diminish the amplification of all that noise, I think it’s a can’t hurt/might help proposition. At the same time, if, under very specific circumstances, this law could be invoked to chill someone’s speech, that’s for the courts to address on a case-by-case basis.
The relatively narrow circumstances in which this law might be misapplied and also implicate speech does not make it “constitutionally flawed,” as the EFF claims. One can misapply a wide variety of laws we have right now to chill someone’s speech, which is why we appeal to courts to address such conflicts. Meanwhile, it seems reasonable to conclude that the intent to deceive, whether by bot or any other means, is rarely benign.
* The post cites three examples, but the third links to a dead URL.
Image by graphicwithart