Blender, the AI chatbot that's as likely to swear at you as give a civil answer
- Scraping the guts out of Reddit to formulate question and answer responses
- What could possibly go wrong?
- Can be personable and show empathy but sometimes just can't be arsed
- About as human as a Norwegian Blue parrot
In 1953, when Alan Turing had been chemically castrated but not yet hounded by the British authorities to the point of suicide, the British science-fiction author Arthur C. Clarke wrote a story called "The Nine Billion Names of God". Not to be outdone, never to be outdone, in 2020 Facebook is under way with its gigantic chatbot, "Blender", which will have 9.6 billion parameters. That nice young Mr. Zuckerberg had better keep a close eye on things and make sure that Blender doesn't emulate the computer described in the short story. Nasty business, that.
The initiative might better have been called "Scraper", as in 'the bottom of the barrel', because its neural networks have so far scoured 1.5 billion comments from Reddit message boards and used data sets and algorithms to produce question and answer dialogues, some of which, it turns out, can be particularly inappropriate.
The Facebook AI Team claims that Blender has “the ability to assume a persona, discuss nearly any topic, and show empathy" and so, to users, “feels more human” than any earlier chatbot. It's a shame the same can't be said about someone who may actually believe he already owns the nine billionth name of God as part of his data collection.
In a blog posting, team leaders Stephen Roller, Jason Weston and Emily Dinan write, “Conversation is an art that we practice every day, when we’re debating food options, deciding the best movie to watch after dinner, or just discussing current events to broaden our worldview. For decades, AI researchers have been working on building an AI system that can converse as well as humans can: asking and answering a wide range of questions, displaying knowledge, and being empathetic, personable, engaging, serious, or fun, as circumstances dictate."
They go on, "So far, systems have excelled primarily at specialised, pre-programmed tasks, like booking a flight. But truly intelligent, human-level AI systems must effortlessly understand the broader context of the conversation and how specific topics relate to each other."
The ability to hold a conversation, with all its nuances, is a human attribute that takes many years to master. It is just about apparent in a three-year-old, but Blender's abilities are somewhere below that. It's probably at about the level of an irascible Norwegian Blue parrot.
Facebook claims Blender can maintain "some level of coherency" during conversations and can ask and answer questions "appropriately" - so it might be regarded as already being in the superhuman bracket as far as some people are concerned. But it isn't.
Blender's apparent approximation to humanity falls apart after between 12 and 14 input-response sequences, whereupon it reverts to repeating itself, ignoring questions, churning out gobbledygook, misinformation and downright lies, or just swearing at random. Researchers call this "hallucinating knowledge", which is something many of my acquaintances have acquired towards the end of a night in the pub.
The team has published Blender's open-source code, so things might improve - or get worse.
In the Arthur C. Clarke story, Tibetan monks in a monastery high in the Himalayas use a computer program to discover and list the nine billion names of God - and they are successful. On a crystal-clear mountain night, as the last name is printed out, "overhead, without any fuss, the stars were going out" as space, time and the universe ended. Facebook can't do that, surely? Hang on a second. Where's my tinderbox, flint and candle?