Questioning whether the technology exists yet for bots to forge meaningful relationships with humans–and, if not, what we should trust them with.
Science fiction is full of bots that hurt people. HAL 9000 kills one astronaut and tries to kill another in 2001: A Space Odyssey; Ava in Ex Machina expertly manipulates the humans she meets to try to escape her cell; the T-800 is known as The Terminator for obvious reasons. Even more common, though, are […]
The customer service chatbots offered by companies are simply a new way to gather user data.
While bots can now write fiction, they lack the depth and nuance of human storytellers–for now.
People form intimate connections with even the simplest of machines–and as bots become more sophisticated, so will our relationships with them.
The next generation of lie detectors may be chatbots–but there are serious questions regarding the ethical framework underpinning them.
Deploying bots in a medical setting demands a whole new ethical framework.
“Here I am, inside a chat app, with no agency to ask the most basic questions about how the product works.”
The biggest problem for robots understanding dance is the same as for humans–notation.
Our bot ecosystems might be intelligent, but that doesn’t mean we have to be able to talk with them.
Could applying the Internet of Things to animals someday allow us to talk to them?
The earliest attempts at chatbots were designed to capture user attention on the early web.