A little over a year ago, Crew Interactive Mobile Companion (CIMON), an artificial intelligence system created in collaboration between IBM (Watson, obvi), Airbus, and the German Aerospace Center (DLR), began its first missions on the International Space Station (ISS).
The goal was to create a sort of super-charged Alexa, but better (which, frankly, doesn’t seem like a stretch). Imagine an astronaut conducting experiments and using natural language processing (NLP) to ask CIMON to look up related info. Our astronaut could just ask “what’s next?” and the AI would know what he’s talking about and give helpful answers. That by itself sounds pretty fricking awesome, but I wonder how accurate CIMON really is in daily use. If CIMON really understands vague questions and commands like “what’s next?” and can figure out the context, that seems pretty advanced to me.
Anyway, the upgraded version (CIMON 2) launched yesterday. With it come a number of improvements, in particular an attempt to fuck with Emotional Intelligence because we absolutely need a machine to replace the one thing we humans are actually supposed to be good at.
According to TechCrunch, CIMON 2 uses IBM’s Watson Tone Analyzer to detect emotion in astronauts’ speech. One example of where this could come in handy, quoting the article: it “…could help evolve CIMON into a robotic countermeasure for something called “groupthink,” a phenomenon wherein a group of people who work closely together gradually have all their opinions migrate toward consensus or similarity. A CIMON with proper emotional intelligence could detect when this might be occurring, and react by either providing an objective, neutral view — or even potentially taking on a contrarian or “Devil’s advocate” perspective”.
That also sounds cool. If the AI can a) detect bias and b) make a suggestion on a complex topic that is not affected by emotion, then that’s honestly more advanced than what we’ve seen in the Artificial Intelligence field here on Earth.
But I doubt it. Why? Have you used Watson’s Tone Analyzer? It’s… not quite there yet. Okay, it’s honestly just not that good at actually detecting emotion unless it’s handed something obvious: angry screaming, curse words, and so on. Humans pick up on subtle hostility, bias, and sarcasm that Tone Analyzer currently misses.
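To make that limitation concrete, here’s a deliberately naive sketch of keyword-style emotion detection. This is NOT the Watson Tone Analyzer API (which uses trained models and requires credentials to call); it’s just a toy illustration of the general problem: surface markers catch the screaming, but a sarcastic line carries none of them.

```python
# Toy sketch (not the Watson API): a naive keyword-based "tone detector".
# Obvious anger markers are easy to flag; sarcasm has no such surface signal.

ANGER_MARKERS = {"furious", "hate", "angry", "terrible", "damn"}

def naive_tone(text: str) -> str:
    """Return 'anger' if any obvious marker word appears, else 'neutral'."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return "anger" if words & ANGER_MARKERS else "neutral"

# Obvious hostility: caught.
print(naive_tone("I HATE this damn experiment!"))               # anger
# Sarcasm: every word is individually pleasant, so it slips through.
print(naive_tone("Oh great, the sensor died again. Perfect."))  # neutral
```

Real tone models are smarter than a keyword set, but the failure mode is the same in kind: the more the emotion lives in context rather than vocabulary, the worse they do.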
Further, TechCrunch’s article quotes IBM’s Matthias Biniok making some very bold claims that really need to be backed up publicly to be believable. IBM got good at the PR game once it realized it had a potential gold mine after Watson won Jeopardy!, the Interwebs went bonkers, and people started naming their kids Watson and AI (kidding. Sorta). But that doesn’t exempt IBM from verifying the claims it makes.
Right now, all we have is Mr. Biniok’s word about CIMON’s capabilities. As of this writing, there are no actual demonstrations of CIMON available online to the public, at least nothing that could fall under the heading of “bold claim, bold proof”. In fact, the TechCrunch article offers only one person’s point of view and statements: Mr. Biniok’s.
I love what Biniok and IBM claim CIMON can do. But can it? I’d like to hear from the astronauts who actually have to work with the thing.