Talking With Bots

Why Bots Should Be More Like Plants and Less Like People

Our bot ecosystems might be intelligent, but that doesn't mean we have to be able to talk with them


A petal lying in grass, surrounded by emoji.
Image credit: Matt Locke

It’s a warm, sunny day in South London and I’m sitting in a suburban garden with digital storyteller Tim Wright. As he goes to make us a cup of tea, I let the ambient chatter of a summer afternoon drift over me–the babble of children playing in a nearby school, birds chirping to each other, traffic and trains trundling past, a couple on the street outside breaking into peals of laughter.

Despite all this audible chatter, I’m here to talk about a network of conversations that we can’t hear. The garden around us–blossoming fruit trees, thick borders, and fresh cut lawns–is also communicating, an ecosystem sharing information and competing for resources using a grammar and vocabulary that is completely alien to us. Wright thinks we can learn from the way plants talk to design better networks of bots–the intelligent agents that are being hyped as the way we’ll communicate with our tech ecosystems in the future. Instead of building bots like Apple’s Siri and Amazon’s Alexa in our likeness, he believes the answer might be to stop trying to make bots behave like humans altogether.


The science of plant communication has, to be honest, a controversial history. There are many decades of debates and arguments about what plant communication actually involves, and whether we can ascribe human qualities like intelligence or even consciousness to them. Clifford Slayman, a professor of cellular and molecular physiology at Yale, has described plant intelligence as “the last serious confrontation between the scientific community and the nuthouse.”

“So obviously I thought, ‘Great! Why wouldn’t you want to do something that was quite scientific, but also part of the nuthouse?’” Wright tells me as he explains how he became interested in the subject. “I started reading up a lot about where this came from,” he says. “In the 60s and 70s, there was a growth in hippie culture and green movements, and this brought the debate about giving plants rights to a larger audience. But there was also some pretty bogus science–connecting plants to lie detectors and then seeing if, when they cut the plant, it would ‘scream.’

“This culminated in the book The Secret Life of Plants. It was a big bestseller about the same time as Zen and the Art of Motorcycle Maintenance. It’s a similar product of its era, with these scientists–amateur scientists mainly, I have to say–claiming that not only would the plants react to being burned and cut, but they would then recognize the person who’d cut them and start sending off signals when that person entered the room.”

These experiments demonstrate the two elements that have made mainstream scientists critical of plant intelligence studies. The first is anthropomorphism–the tendency to project human-like qualities onto inanimate objects. This might be cute in internet memes, but it’s frowned upon in serious scientific research. The second is the pseudoscience of these early experiments–specifically the researchers’ insistence that others couldn’t replicate their results because of environmental complexity or the expensively customized equipment involved. ESP researchers and hoaxers of the same era used the same excuses to deflect criticism of their experiments. If “researchers” claim an experiment can’t be replicated, that’s usually a sign of pseudoscience.


More recently, however, the rise of cheap, high-quality sensing technology has led to a resurgence in the study of plant communication. This time it isn’t pseudoscientists but agricultural R&D labs that are leading the research–though the experiments are surprisingly similar. “A lot of this research is answering questions like, ‘What will plants do under stress?’” explains Wright. “So instead of going in and attacking a plant with scissors, it’s ‘What if I halve the amount of water you get? Will it make a difference if I just isolate you, or will you cooperate with your crop?’”

We now know that plants release volatile chemicals to ward off predators, and that this can trigger similar releases by nearby plants that aren’t actually under attack. Monica Gagliano, a professor of evolutionary ecology at the University of Western Australia, has taken this even further by showing that plants respond to acoustic–as well as chemical–signals. As it turns out, plants emit acoustic signals in a number of ways, from boughs moving in the wind to tiny emissions from individual cells. Gagliano is experimenting with plants encased in glass boxes separated by vacuums and playing back acoustic resonances to see if it affects plant behavior.

Unlike previous generations of pseudoscientists, Gagliano is not looking to see if plants like Mozart, but rather trying to understand what “listening” might actually mean for them. In a call for studies on acoustic communication in plants, she wrote: “The specific sensory mechanisms available in plants for detecting sound are still unclear, although they are likely to be an aspect of the multifaceted phenomenon of mechanosensing, the intrinsic ability to sense and respond to mechanical perturbations.” So plants might not “listen” so much as respond to vibrations at specific frequencies.

This is where current research differs from the anthropomorphic fantasies of The Secret Life of Plants. Researchers like Gagliano and Stefano Mancuso are not projecting human qualities onto plants, but instead trying to understand how they sense and react to messages in a register completely different to ours. Mancuso thinks plants could be even more sophisticated in their communication than animals. In his talk at TEDGlobal in 2010, he said, “Every single root apex is able to detect and to monitor concurrently and continuously at least 15 different chemical and physical parameters.” Humans and animals have to constantly refocus their attention, but it seems that plants manage thousands of parallel sensory inputs at the same time.


The fact that plants are continuously monitoring their environment is what makes them interesting to Wright, and intriguing as a model for building networks of bots. But how do we translate plant communication into something we can understand as humans? As a way of finding out, Wright bought secondhand vinicultural monitoring equipment and connected it to a damson tree in his garden. He used the data to try and develop a language for the tree that he could broadcast on social media.

Wright shows me how he connected the equipment to his tree: monitoring how much light each leaf gets in the canopy, measuring branch growth in girth and length with a micrometer, and taking soil temperature half a meter and five meters down [1.6 feet and 16.4 feet, respectively]. He also fitted a little sensor to the leaves so he can track the tree’s water cycle over 24 hours.

A man crouches in a garden by a tree.
Tim Wright and his connected damson tree. Image credit: Matt Locke

“I just put all the data on my own server on a website. I didn’t go public, because I’m still learning,” he says. “I thought, ‘I’m not ready to show my workings at this point, because I’ll look like a complete twit.’ It’s all theory. Also, it’s still me deciding what the correlation is between that data set there and what it means as a piece of content online.”
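Wright’s point–that it’s the author, not the tree, who decides how sensor data becomes content–can be made concrete with a tiny sketch. Everything here is hypothetical: the field names, thresholds, and phrases are invented for illustration and are not taken from Wright’s actual setup.

```python
# A hypothetical author-defined mapping from sensor readings to a
# social-media-style "tree utterance." The plant supplies only numbers;
# every correlation between data and meaning is a human editorial choice.

def tree_to_post(reading):
    """Turn one set of (invented) sensor readings into a short utterance."""
    phrases = []
    if reading["soil_temp_c"] < 5:
        phrases.append("roots running cold")
    if reading["canopy_light_lux"] > 50_000:
        phrases.append("drinking in full sun")
    if reading["leaf_water_flow"] < 0.2:
        phrases.append("holding water close")
    if not phrases:
        return "..."  # nothing notable crossed a threshold: the tree stays quiet
    return ", ".join(phrases)

# A bright, mild afternoon with normal water movement:
print(tree_to_post({"soil_temp_c": 14.0,
                    "canopy_light_lux": 62_000,
                    "leaf_water_flow": 0.6}))
# prints "drinking in full sun"
```

The sketch makes Wright’s worry visible: change the thresholds or the phrases and the “voice” of the tree changes entirely, even though the tree itself has done nothing different.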

As he picks weeds around the damson tree, he tells me, “What I really liked in thinking about plant communication was the gibberish bit. Every time I tried to shift from gibberish to meaning, or a progression of a narrative, I, as the author, got involved and it was really obvious. So it was never believable that the plant was actually doing it. I wonder whether we just haven’t got to the uncanny valley of plants yet.”

The uncanny valley is the cultural shiver we feel when we see things that aren’t quite human but appear to behave as if they were. It’s the place that all robots and AI agents like Siri have to travel through if they want to be seen by users as friendly and intelligent. But if we designed bots to be like plants rather than humans, would there be an uncanny valley at all?

“There’s a fantastic little plant called the dodder plant. I went to one lecture with a scientist where he talked about it as the Count Dracula of plants,” says Wright. “Immediately you’re thinking, ‘You shouldn’t say that as a scientist. That’s not the right way,’ but it caught my attention. It particularly likes feeding off tomatoes. It’s just got a little tendril, and it comes up, and then it basically rotates around sniffing the air to see what’s around. If it smells tomato, then it grows towards the tomato. It then hooks on and digs into the tomato and feeds off it, basically takes all its nutrition from it. If you put a corn plant and a tomato plant in with it, it’ll always go for the tomato. If you take the tomato out, it’ll go for the corn anyway, because that’s the only thing there. So it’s making calculations all the time about ‘Where’s the best thing for me to grow towards in order to get the nutrition I want?’ It knows the difference between a corn plant and a tomato plant.”

Watching this kind of behavior in plants, particularly in time-lapse videos, makes it easier for us to think of them as showing animal-like behavior. But if plants do show intelligence, it’s a networked intelligence rather than an individual one. Ninety percent of land plants use fungi as a network to connect to other plants in their immediate environment. They not only use these mycelium networks to link their roots with other plants, but also use them to share nutrients or release toxins to ward off competitors.

Wright thinks this kind of networked intelligence is closer to how we’re starting to construct our identities in digital networks. “I’m beginning to think that the issue of intelligence in plants is a bit like our sense of self,” he tells me. “Our sense of self is made up of lots of bits, isn’t it? We recognize that some bits are the same and some bits are different, but we understand there’s an overall system that means we can say, ‘That’s us.’ That’s much more explicit in your digital self than it is in your physical self. You can’t really express how much your liver is part of your sense of self. But you can measure how much of your digital self is your YouTube channel or your Twitter channel. You can stitch parts of your digital self together as a conglomerate idea of ‘that’s me, online.’ You can’t really do that with the in-body experience of yourself.”

This idea of a networked plant intelligence offers another vision of how an Internet of Things could work–an alternative to imagining your house as a network of Siri-like talking fridges and light fittings. Wright thinks the challenge, as with plants, is how to interact with this networked intelligence without anthropomorphism creeping in again. “When I’m creating collaborative story systems, do they always have to involve human beings? Why can’t the thermostat and my pot plant be key players in telling a story? It’s all very well observing how they’re stimulated by the real world, but if I gave them an online presence, how would they know they’ve got one and how would it change their behavior?”

Stefano Mancuso finished his TED talk with a similar call to imagine digital intelligence as plant-like rather than human-like:

“Let’s imagine that we can build robots that are inspired by plants. We have androids that are inspired by man. But why have we not got any plantoids? Well, if you want to fly, it’s good that you look at birds–to be inspired by birds. But if you want to explore soils, or if you want to colonize new territory, the best thing that you can do is to be inspired by plants that are the masters in doing this. It’s much easier to build hybrids. Hybrid means it’s something that’s half-living and half-machine. It’s much easier to work with plants than with animals. They have computing power. They have electrical signals. The connection with the machine is much easier, much more ethically possible.”


As we wrap up our chat, I lie down in Wright’s garden to take some pictures from a plant’s perspective. At this level, there’s no evidence of human presence at all, and it makes me think about how quickly plants would overtake our cities if we were no longer around. If we’re building a new ecosystem of bots talking to each other over the internet, what kind of ecosystem are we creating? An ecosystem of bots designed to communicate like humans would quickly become unbearably noisy. But an ecosystem of bots made to communicate like plants, using networked intelligence and broadcasting to each other in millions of parallel but unheard conversations, feels much more manageable.

I look at Wright as he tends to his garden, choosing what to prune and what to let grow, deciding the overall shape and design but unable to hear the millions of conversations that surround him. Maybe this is what our relationship with digital networks will look like in the future–more about pruning and weeding than coding and building. Less like architecture and more like gardening. Our bot ecosystems might be intelligent, but that doesn’t mean we have to be able to talk with them. We might just need to mow them every now and then.


How We Get To Next was a magazine that explored the future of science, technology, and culture from 2014 to 2019. This article is part of our Talking With Bots section, which asks: What does it mean now that our technology is smart enough to hold a conversation? Click the logo to read more.