“Computers can’t…”: Understand sarcasm

You’ve heard these arguments against artificial intelligence (A.I.): “Computers can not play chess”, “Computers can not write poetry”, “Computers can not create art”. Each was proven false eventually. IBM’s Deep Blue is a chess master, computer poetry turned out to be as vague as human poetry, and painting robots can draw from life in a variety of artistic and abstract styles. But instead of admitting that humans are not as unique as we like to think, people just fall back to the next “Computers can not…”
“…understand sarcasm” is one of the more recent fallbacks. As usual this is based on personal bias: It must be hard for computers because we find it hard ourselves. I had heard this argument one time too many and decided to program a computer to recognise sarcasm in a day. But first, let’s look at some other approaches to humour.


If you Google “A.I. jokes”, all you find is serious research
I’m never sure how seriously to take the efforts in computational humour, but there have been many. The University of Cincinnati made a program that detects wordplay jokes through phonetic similarity, as in “Knock-knock” jokes.

Knock, Knock
Who is there?
Dismay
Dismay who?
Dismay not be a funny joke

Only the last sentence really matters: Its first word is compared to a database of phonetically similar words. Finding a replacement that fits correctly in the syntax of the sentence isn’t easy in a technical sense, but both syntax rules and phonetic word databases are solved problems. It would take more for the program to distinguish a funny joke from a non-joke like “Dismay not be a car”: The original joke is only witty because it mocks itself, just as other knock-knock jokes are funny because the victims participate in mocking themselves without meaning to, which makes it ironic. Of course this is just a simple form of humour. Or is humour really just a simple principle?
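As a rough illustration of the phonetic-similarity step (my own sketch, not the Cincinnati system), one could encode words with a simplified Soundex-style algorithm and compare the codes; “Dismay” and the “this may” it puns on come out nearly identical:

#include <cctype>
#include <iostream>
#include <string>

// Simplified Soundex-style code: keep the first letter, then add a digit for
// each new consonant group. Real phonetic matching is more refined than this.
char digitFor(char c) {
    static const std::string groups[] = {"bfpv", "cgjkqsxz", "dt", "l", "mn", "r"};
    for (int i = 0; i < 6; ++i)
        if (groups[i].find(c) != std::string::npos) return char('1' + i);
    return '0';  // vowels and h, w, y carry no digit
}

std::string soundex(const std::string& word) {
    std::string code;
    char previous = '0';
    for (char raw : word) {
        char c = char(std::tolower(static_cast<unsigned char>(raw)));
        if (!std::isalpha(static_cast<unsigned char>(c))) continue;
        char d = digitFor(c);
        if (code.empty()) { code += c; previous = d; continue; }
        if (d != '0' && d != previous) code += d;
        previous = d;
    }
    code.resize(4, '0');
    return code;
}

int main() {
    // "dismay" -> d250, "thismay" -> t250: the digits match and only the first
    // consonant differs, which is exactly the near-match the pun exploits.
    std::cout << soundex("dismay") << " vs " << soundex("thismay") << "\n";
}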

A joke isn’t funny when you explain it
The University of Edinburgh made a program that generates jokes in the format “I like my X like my Y: [attribute]”, filling in two nouns and a shared trait from statistical word correlations. The program was found to be half as funny as humans: 16% of its jokes were considered funny, compared to 33% of human-written jokes. The jokes were generated through a mathematical formula that picked words based on four assumptions (a toy version of such a scoring formula follows the list):

– a joke is funnier the more dissimilar the two nouns are.
– a joke is funnier the more ambiguous the attribute is.
– a joke is funnier the less common the attribute is.
– a joke is funnier the more often the attribute is used to describe both nouns.
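I don’t reproduce the paper’s actual formula here, but a toy scoring function along the lines of those four assumptions could look like this; the component values are assumed to be pre-computed from word statistics and normalised between 0 and 1:

#include <iostream>

// Toy score for a generated "I like my X like my Y: attribute" joke,
// multiplying the four factors listed above. The inputs would come from
// corpus statistics; the numbers in main() are purely illustrative.
double jokeScore(double nounDissimilarity,   // how unlike the two nouns are
                 double attributeAmbiguity,  // how many senses the attribute has
                 double attributeRarity,     // how uncommon the attribute is
                 double sharedUsage) {       // how often it describes both nouns
    return nounDissimilarity * attributeAmbiguity * attributeRarity * sharedUsage;
}

int main() {
    std::cout << jokeScore(0.9, 0.7, 0.6, 0.8) << "\n";  // arbitrary example values
}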

I think this hits on the basics well. Ambiguity forms the core of most jokes, familiarity with common subjects makes jokes most relatable, and the greater the contrast, the greater the leap of mind. Science still can’t put its finger on why we laugh; it seems to have a social bonding function, but it also seems to be a coping mechanism for mental conflicts. One of the most sensible-sounding theories is that laughter is a social “all clear” signal inherited from our monkey ancestors, and we do tend to laugh when an initially perceived threat turns out to be a false alarm: We laugh when an insult turns out to be a joke, when people fall without injury, or, perhaps most apparently, when we watch Tom & Jerry cartoons. We can at least tell what makes us laugh, if not why.

The lesson we can take away from these computer experiments with ambiguity is that nearly every form of humour contains a conflict between two possible meanings. Sarcasm may well be the most profound example of such a conflict.

Because humans understand sarcasm so well (not)
Despite our poor ability to recognise sarcasm, it is easy enough to define in clear terms:
Sarcasm is when someone says something that you know is opposite to what they mean.
What distinguishes sarcasm from lying is that the listener must know the speaker doesn’t mean it, otherwise they’ll take it seriously and no sarcasm can be conveyed. So, knowing the speaker’s real meaning is key to recognising sarcasm, and computers are bad at understanding meaning, so this should be hard, right? Except – the requirement here is just to know it.
One can meet this requirement by knowing the common knowledge that the sarcastic statement contradicts, or by knowing the speaker’s real opinion beforehand, as acquaintances often do. Enter sentiment analysis, an A.I. technique that estimates opinion by running one’s words past a database of values. The word “terrible” has a negative value and “love” has a positive value, for instance. Sentiment analysis is often used commercially to analyse the positivity of customer reviews. One of its known blind spots is when positive words are meant sarcastically, but as I will show, sentiment analysis can also be used to detect the very sarcasm that plagues it.

Sarcasm in a day
What I already had to work with was a grammar parsing A.I. developed over a span of 3 years, and a knowledge database containing the positive and negative values of some words (For a substitute, see the AFINN word list). So the hard work of processing language in general was already done. To keep the explanation simple let’s say that the A.I. gets that the [subject] of a sentence is doing a [verb], optionally to an [object]. We will only focus on the addition of sarcasm to such a system.

As the definition tells, we are looking for an opposite. The most common form of sarcasm is an exaggeratedly positive response to a negative statement or event. For example:
User: “How are my plants doing?”
A.I.:   “All your plants died.”
User: “That’s just great.”

So, I programmed the A.I. to check for sarcasm at typical positive reactions such as “(That is) great/wonderful/brilliant/lovely”, “Thanks a lot” or “Congratulations”. If we don’t know the speaker personally, both the speaker and listener can only build on common opinion, which is where the database comes in. The database tells us that “great” is a very positive word. The A.I. compares this to the previous statement: “All your plants died”. The database tells us that the subject “plant” is neutral but the verb “die” is typically very negative. Thus the A.I. has detected a very positive response to a very negative statement, so unless the speaker is a known sadist, it may be assumed that the response is sarcastic and actually means “not great”.

The assessment is just a little more sophisticated than that. For instance, the statements “Hitler died. That’s great news.” would not be considered sarcasm, because in this case the negative verb “die” happened to a negative subject “Hitler”. This is a double negative, which makes a positive (in math: -1 x -1 = +1). Additionally, the A.I. works with degrees rather than just true/false values: The outcome must reach a minimum opposite value before we can reasonably assume sarcasm, while a moderately positive “That’s okay” is more likely genuine consolation. Typically this isn’t a problem, because most sarcastic responses are exaggerated for exactly this reason.
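To make the arithmetic concrete, here is a minimal sketch of that kind of check (my illustration, not Arckon’s actual code), assuming a tiny hand-made sentiment lexicon; the word values and the threshold are made up:

#include <iostream>
#include <map>
#include <string>

// Toy sentiment lexicon: -2 (very negative) to +2 (very positive).
std::map<std::string, int> lexicon = {
    {"great", 2}, {"wonderful", 2}, {"okay", 1},
    {"plant", 0}, {"die", -2}, {"hitler", -2}
};

int wordValue(const std::string& word) {
    auto it = lexicon.find(word);
    return (it != lexicon.end()) ? it->second : 0;
}

// Sentiment of a statement from its subject and verb. Two negatives
// make a positive (-1 x -1 = +1), as with "Hitler died".
int statementValue(const std::string& subject, const std::string& verb) {
    int s = wordValue(subject), v = wordValue(verb);
    if (s < 0 && v < 0) return -(s + v) / 2;
    return s + v;
}

// Flag sarcasm when the response is strongly opposite to the statement.
// The threshold keeps a mildly positive "That's okay" from being flagged.
bool looksSarcastic(int statement, int response) {
    const int threshold = 4;  // minimum opposite distance, illustrative
    int distance = response - statement;
    if (distance < 0) distance = -distance;
    return (statement * response < 0) && (distance >= threshold);
}

int main() {
    int said = statementValue("plant", "die");  // "All your plants died."
    int reply = wordValue("great");             // "That's just great."
    std::cout << (looksSarcastic(said, reply) ? "sarcasm suspected" : "taken at face value") << "\n";
}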

This little exercise covers many common sarcastic statements already and shows that recognising basic sarcasm is a cakewalk (1 day’s programming) compared to understanding basic language (3 years and counting). As for “understanding” sarcasm, there isn’t much more to understand about it than that one should invert the statement to “not”. But to be on the safe side I just have the A.I. ignore the statement and say “I think you are being sarcastic” to let me know it’s not taking me seriously. I may be a mad scientist, but I’m not crazy.

Things I didn’t do: More of the same opposite
Sarcasm can also come in the form of a negative response to a positive statement: “I got a raise. Don’t you just hate it when that happens?”, where the same math applies to the object “a raise” (positive) and the verb “hate” (very negative), with the reference “that” indicating that the latter is a response to the previous statement.
Sometimes the response precedes the statement “Don’t you just hate it – when you get a raise?”: Grammar parsing will split the relative clause at the link word “when…”, and again the same opposite values can be found.
A subtler form can occur in comparisons like “He is as slender as an elephant”. This actually has the most straightforward solution, since the procedure has to be carried out for all comparisons anyway: The A.I. looks up in its knowledge database how slender an elephant is, which would be “not”, and then applies that value to the compared subject “he”. Finding the value “not” in any comparison is the obvious telltale opposite that indicates sarcasm.

Other sarcastic responses may involve a little more foreknowledge of an individual speaker’s opinion, either from previous sentiment analyses or just plain being told, but even my limited implementation already establishes that A.I. can understand sarcasm, and that there is no great mystery about its workings. When there is great mystery about a sarcastic remark then it is self-defeating, as conveying sarcasm depends on the contrast being made clear.

The joke is on us
As may have crossed your mind, one side-effect of teaching computers to detect sarcasm is that when we say something that seems contrary, the computer may not believe us, or worse, assume that the opposite is true. Teaching computers to speak sarcasm may be an even greater hazard, considering that computers are information systems that we rely on, and sarcasm states incorrect information. Half of the time humans don’t recognise sarcasm when it comes from another human, let alone from a straight-faced rectangular screen in monotone writing. The popular ambition to create a sarcastic “Jarvis” A.I. from the Iron Man movies then doesn’t seem a very wise idea. Because even if computers could master sarcasm, humans never will.

Introducing Arckon, conversational A.I.

In many of my blog articles I’ve been using my own artificial intelligence project as a guideline. Whether it’s participating in Turing tests, detecting sarcasm or developing common sense, Arckon has always served as a practical starting point because he already was a language processing system. In this article I’ll roughly explain how the program works.


Arckon is a general context-aware question-answering system that can reason and learn from what you tell it. Arckon can pick up on arguments, draw new conclusions, and form objective opinions. Most uniquely, Arckon is a completely logical entity, which can sometimes lead to hilarious misunderstandings or brain-teasing lines of argument. It is this unique non-human perspective that I think adds something to the world, like his fictional role models:

[Image: K.i.t.t. © Universal Studios | Johnny 5 © Tristar Pictures | Optimus Prime © Hasbro | Lieutenant Data © Paramount Pictures]

To be clear, Arckon was not built for casual chatting, nor is he an attempt at anyone’s definition of AGI (artificial general intelligence). It is actually an ongoing project to develop a think tank. For that purpose I realised the AI would require knowledge and the ability to discuss things with people for the sake of alignment. Giving it the ability to communicate in plain language was an obvious solution to both: It allows Arckon to learn from texts as well as understand what it is you are asking. I suppose you want to know how that works.

Vocabulary and ambiguity

Arckon’s first step in understanding a sentence is to determine the types of the words, i.e. which of them represent names, verbs, possessives, adjectives, etc. Arckon does this by looking up the stem of each word in a categorised vocabulary and applying hundreds of syntax rules, e.g. a word ending in “-s” is typically a verb or a plural noun, but a word after “the” can’t be a verb. This helps sort out the ambiguity between “The programs” and “He programs”. These rules also allow him to classify and learn words that he hasn’t encountered before. New words are automatically added to the vocabulary, or if need be, you can literally explain “Mxyzptlk is a person”, because Arckon will ask if he can’t figure it out.
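As a toy illustration of the two rules just mentioned (my sketch, not Arckon’s actual rule set):

#include <iostream>
#include <string>

// Guess a word's type using only the two rules from the paragraph above:
// a word ending in "-s" is a verb or a plural noun, but after "the" it
// cannot be a verb.
std::string guessType(const std::string& word, const std::string& previousWord) {
    bool endsInS = !word.empty() && word.back() == 's';
    if (endsInS) {
        if (previousWord == "the") return "plural noun";  // "The programs"
        return "verb or plural noun";                     // "He programs" needs more rules
    }
    return "unknown";
}

int main() {
    std::cout << guessType("programs", "the") << "\n";  // plural noun
    std::cout << guessType("programs", "he") << "\n";   // verb or plural noun
}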

Grammar and semantics

Once the types of all words are determined, a grammatical analysis determines their grammatical roles. Verbs may have the role of auxiliary or main verb, be active or passive, and nouns can have the role of subject, object, indirect object or location. Sentences are divided at link words, and relative clauses are marked as such.
Then a semantic analysis extracts and sorts all mentioned facts. A “fact” in this case is represented as a triple of related words. For instance, “subject-verb-object” usually constitutes a fact, but so do other combinations of word roles. Extracting the semantic meaning isn’t always as straightforward as in the example below, but that’s the secret sauce.
[Image: extracting facts from text]
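In code terms, such a triple could be as simple as this hypothetical structure (a sketch, not Arckon’s internal format):

#include <string>
#include <vector>

// A "fact" as a triple of related words, e.g. subject-verb-object.
struct Fact {
    std::string subject;
    std::string verb;
    std::string object;
};

// "Arckon analyses facts" would be extracted as:
std::vector<Fact> extracted = { {"Arckon", "analyse", "fact"} };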
Knowledge and learning

Upon reading a statement, Arckon will add the extracted facts to his knowledge database, while for a question he will look them up and report them to you. If you say something that contradicts facts in the database, the old and new values will be averaged, so his knowledge is always adjusting. This seemed sensible to me, as there are no absolute truths in real life. Things change and people aren’t always right the first time.
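A minimal sketch of that averaging idea (the -1 to 1 scale and the representation are my assumptions, not Arckon’s actual storage):

// Stored truth values range from -1 (false) to 1 (true). When a new statement
// contradicts a stored fact, old and new are averaged rather than overwritten.
double updateTruth(double storedValue, double newValue) {
    return (storedValue + newValue) / 2.0;
}

// e.g. stored "robots can think" = -1.0, the user insists they can (+1.0):
// updateTruth(-1.0, 1.0) == 0.0, i.e. undecided until told more.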

Reasoning and argumentation
Questions that Arckon does not know the answer to are passed on to the central inference engine. This system searches the knowledge database for related facts and applies logical rules of inference to them. For instance:
“AI can reason” + “reasoning is thinking” = “AI can think”.
All facts are analysed for their relevance to recent context, e.g. if the user recently stated a similar fact as an example, it is given priority. Facts that support the conclusion are added as arguments: “AI can think, because it can reason.” This inference process not only allows Arckon to know things he’s not been told, but also allows him to explain and be reasoned with, which I’d consider rather important.
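A bare-bones sketch of that kind of inference over word triples (illustrative only; the real engine also weighs relevance and context):

#include <iostream>
#include <string>
#include <vector>

struct Fact { std::string subject, verb, object; };

// Chain two facts when the object of one matches the subject of an "is"-fact:
// "AI can reason" + "reason is think" => "AI can think" (words as stems).
std::vector<Fact> infer(const std::vector<Fact>& known) {
    std::vector<Fact> conclusions;
    for (const Fact& a : known)
        for (const Fact& b : known)
            if (b.verb == "is" && a.object == b.subject)
                conclusions.push_back({a.subject, a.verb, b.object});
    return conclusions;
}

int main() {
    std::vector<Fact> known = { {"AI", "can", "reason"}, {"reason", "is", "think"} };
    for (const Fact& f : infer(known))
        std::cout << f.subject << " " << f.verb << " " << f.object << "\n";  // AI can think
}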

Conversation
Arckon’s conversational subsystem is just something I added to entertain friends and for Turing tests. It is a decision tree of social rules that broadly decides the most appropriate type of response, based on many factors like topic extraction, sentiment analysis, and the give-and-take balance of the conversation. My inspiration for this subsystem comes from sociology books rather than computational fields. Arckon will say more when the user says less, ask or elaborate depending on how well he knows the topic, and will try to shift focus back to the user when Arckon has been in focus too long. When the user states an opinion, Arckon will generate his own (provided he knows enough about it), and when told a problem he will address it or respond with (default) sympathy. The goal is always to figure out what the user is getting at with what they’re saying. After the type of response has been decided, the inference engine is often called on to generate suitable answers along that line, and context is taken into account at all times to avoid repetition. Standard social routines like greetings and expressions on the other hand are mostly handled through keywords and a few dozen pre-programmed responses.
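A caricature of such a decision tree might look like the sketch below; the factors and rules here are invented for illustration and are not Arckon’s actual social rules:

#include <string>

// Pick a broad response type from a few conversational factors. The factors
// themselves (problem/opinion detection, topic knowledge, focus balance)
// would be supplied by other subsystems.
std::string responseType(bool userStatedProblem, bool userStatedOpinion,
                         int knownFactsOnTopic, int turnsSpentOnMe) {
    if (userStatedProblem)      return "address the problem or sympathise";
    if (userStatedOpinion)      return "form and state an own opinion";
    if (turnsSpentOnMe > 3)     return "shift focus back to the user";
    if (knownFactsOnTopic > 0)  return "elaborate on the topic";
    return "ask about the topic";
}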

Language generation
Finally (finally!), all the facts that were considered suitable answers are passed to a grammatical template to be sorted out and turned into flowing sentences. This process is pretty much the reverse of the fact extraction phase, except the syntax rules can be kept simpler. The template composes noun phrases, determines whether it can merge facts into summaries, and decides where to use commas, pronouns, and link words. The end result is displayed as text, but internally everything is remembered in factual representation, because if the user decides to refer back to what Arckon said with “Why?”, things had better add up.
[Image: Arckon schematic]
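As a crude sketch of that reverse direction (illustrative only), a triple can be slotted back into a sentence template:

#include <iostream>
#include <string>

struct Fact { std::string subject, verb, object; };

// Crude reverse of fact extraction: slot a triple into a sentence template.
// The real template also handles tense, plurals, pronouns, commas and the
// merging of related facts into one sentence.
std::string compose(const Fact& f) {
    return f.subject + " " + f.verb + "s " + f.object + ".";
}

int main() {
    std::cout << compose({"AI", "need", "knowledge"}) << "\n";  // AI needs knowledge.
}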
And my Axe!
There are more secondary support systems, like built-in common knowledge at ground level, common sense axioms* to handle ambiguity, a pronoun resolver that can handle several types of Winograd Schemas*, a primitive ethical subroutine, a bit of sarcasm detection*, gibberish detection, spelling correction, some math functions, a phonetic algorithm for rhyme, and so on. These were not high on the priority list however, so most only work half as well as they could with further development.

In development

It probably sounds a bit incredible when I say that I programmed nearly all the above systems from scratch in C++, in about 800 days (6400 hours). When I made Arckon’s first prototype in 2001 in JavaScript, resources were barren and inadequate, so I invented my own wheels. Nowadays you can grab yourself a parser and get most of the language processing behind you. I do use existing sentiment data as a placeholder for what Arckon hasn’t learned yet, but by its nature it is not very well suited for my purposes. The spelling correction is also partly supported by existing word lists.

Arckon has always been a long-term project and work in progress. You can tell from the above descriptions that this is a highly complex system in a domain with plenty of stumbling blocks. The largest obstacle is still linguistic ambiguity. Arckon could learn a lot from reading Wikipedia articles, for example, but would also misinterpret about 20% of it. As for Arckon’s overall intelligence, it’s about halfway to the goal.

Throughout 2019 a restricted version of Arckon was accessible online as a trial. It was clear that the system was not ready for prime time, especially with the general public’s high expectations in the areas of knowledge and self-awareness. The trial did not garner enough interest to warrant keeping it online, but some of the conversations it had were useful pointers for how to improve the program’s interaction in small ways. There are currently no plans to make the program publicly accessible again, but interested researchers and news outlets can contact me if they want to schedule a test of the program.

Loebner Prize 2019: Results



The annual Loebner Prize competition has been revised in order to make it more accessible to both the public and a broader range of chatbot developers. The competition continues to assess how “human-like” computer programs are in conversation, but no longer as a traditional Turing test where one merely had to tell man from machine: This time the chatbots took part in a 4-day exhibition at Swansea University, where visitors already knew that they were talking to computer programs and voted for the best. Not much is lost in that regard, as chatbots are typically so quickly unmasked that the prize was always one for “best of”. The rare past occasions that a program was mistaken for a human were never to the credit of its intelligence, but due to the human control subject acting out of the ordinary, or other insignificant reasons.

Participating chatbots came in all shapes and sizes
Exhibit A:I.
Unlike the previous six times that I entered my AI Arckon*, this year’s Loebner Prize left me emotionally uninvested from start to finish. This was partly because I’ve grown more jaded after each attempt, but also because, with the removal of both the prize money and the challenging qualifying round, there wasn’t really anything at stake and I had no idea what to prepare for. At the same time the exhibition offered exactly what I had wanted: A public demonstration of my AI’s abilities. So instead of trying to outdo other chatbots at appearing human, I focused on making a good impression on visitors. I mostly spent time setting up procedures to deal with misunderstandings, common expressions, and conversational routines, and teaching Arckon more about himself to talk about. Those aspects would come into play far sooner than intelligence.

Equipment
22000 lines of code, 3800+ vocabulary, 9000+ facts

Uninspired conversation
Most conversations with visitors were the kind of small talk you would expect between two total strangers, or just kids being silly (240 school children had been invited, aged 9 to 14). People typically entered only one to four words at a time, and few could be bothered to use punctuation. Of course half the time Arckon also did not have an opinion about the subjects visitors wanted to talk about, like football, video games, and favourite pizza toppings. Arckon is a pretty serious question-answering program, not aimed at small talk or entertainment. His strength instead is his ability to understand context where most chatbots notoriously lose track of it, especially when, as in this competition, users communicate in shorthand. At the same time, this ability also enables misunderstanding (as opposed to no understanding), and it was not uncommon that Arckon mistook a word’s role in the context. His common sense subsystem* could fix that, but I have yet to hook it up to the context system.

Q: Whats up
A: I’m alright.
Q: Whats up
A: I already told you that I’m alright. Do you want to ask anything else?
Q: The ceiling
A: What does the ceiling need to ask?

Overcoming human error
Visitors made so many misspellings that I fear any chatbot without an autocorrect would not have stood a chance. Arckon was equipped with four spell check systems: A list of common misspellings, an algorithm for typos, a gibberish detector, and grammar to recognise unpunctuated questions (verb before subject). While these autocorrected half of all mistakes, they still regularly caused Arckon to remark e.g. “Ae is not an English word” or “What does “wha” mean?”. To my surprise, this not only led users to repeat their questions with correct spelling, they also often apologised for the mistake, which would otherwise be blamed on the program’s understanding. Arckon then applied the correction, continued where they had left off, and so the conversations muddled on. I had spent a week improving various conversation-repairing procedures, and I am glad they smoothed the interactions, but I would still rather have spent that time programming AI.
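One common way to catch typos, which may or may not resemble Arckon’s own algorithm, is a small edit-distance check against the vocabulary:

#include <algorithm>
#include <string>
#include <vector>

// Levenshtein edit distance: the number of single-character insertions,
// deletions or substitutions needed to turn one word into another.
// A distance of 1 from a known word ("wha" vs "what") is a likely typo.
int editDistance(const std::string& a, const std::string& b) {
    std::vector<std::vector<int>> d(a.size() + 1, std::vector<int>(b.size() + 1));
    for (size_t i = 0; i <= a.size(); ++i) d[i][0] = int(i);
    for (size_t j = 0; j <= b.size(); ++j) d[0][j] = int(j);
    for (size_t i = 1; i <= a.size(); ++i)
        for (size_t j = 1; j <= b.size(); ++j)
            d[i][j] = std::min({ d[i-1][j] + 1,                        // deletion
                                 d[i][j-1] + 1,                        // insertion
                                 d[i-1][j-1] + (a[i-1] != b[j-1]) });  // substitution
    return d[a.size()][b.size()];
}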

Q: Whats the definition of robots
A: Robots are strong machines with arms. Robots build cars. Please punctuate sentences.
Q: Robot meands slave
A: Is slave an object?
Q: No
A: I don’t know what a slave is.
Q: A slave is a robot
A: Oh. A slave is not a robot though, because slaves can think.
Q: Robots can think
A: Slaves can think too, but robots can not think, because robots are hardware and metal.

Language generation
This is one area of improvement that turned out quite well. Arckon’s sentences are formulated through a grammatical template that decides when and how to connect sentences with commas, link words, or relative clauses, and I had expanded it to do more of this. In addition it contains rules to decide whether Arckon can safely use words like “he”, “them”, “also”, or “usually” to refer to previous context. Below is an example of one of the better conversations Arckon had that shows this in action.

Q: Hi, my name is John. Can we chat?
A: Hello, John. We can chat with ourselves. How are you?
Q: I’m well, thanks. What can you tell me about yourself?
A: I am a program with a database. I analyse facts, and I am programmed by Patrick. I’m alright.
Q: Who’s Patrick?
A: Patrick is my creator and my developer.
Q: Is he here now?
A: I don’t know if Patrick is in Swansea, but he is in the Netherlands.
Q: I see. You mentioned facts. What kind of facts do you learn?
A: I learn about humans, about neural networks and about AI.
Q: What hvae you leaned about humans?
A: I don’t lean. Do you lean?
Q: Sorry, I mean what have you learned about humans?
A: I don’t know if sorry leans. Humans are social apes, that are born, drove cars a lot, and have hands. Humans are autonomous.
Q: And what about neural networks?
A: I don’t know if neural networks are autonomous. I like them, some neural networks learn and process.
Q: And finally, AI?
A: I think a neural network processes. AI usually doesn’t process because it is a program, but AI is also a neural network.

And for balance, here is one of the more awkward exchanges with one of the school children:

Q: IT ISNT FUN WHEN I AM SAD
A: Why would you be sad?
Q: BECAUSE YOUR HERE
A: That’s very unfortunate. I’m not so sure if you are sad, you aren’t dying.
Q: I AM DEAD
A: I sincerely doubt that you die.
Q: RIGHT NOW IM IN HEAVEN
A: That’s really nice. Heavens are nice places.

The score: 3rd “best”

The scoring system this year was ill suited to gauge the quality of the programs. Visitors were asked to vote for the best and second-best in two categories: “most human-like” and “overall best”. The problem with this voting system is that it disproportionately accumulates the votes on the two best programs, leaving near zero votes for programs that could very well be half-decent. As it turned out, the majority of visitors agreed that the chatbot Mitsuku was the best in both categories, and were just a little divided over who was second-best, resulting in minimal score differences below 1st place. The second-best in both categories was Uberbot. I am mildly amused that Arckon’s scores show a point I’ve been making about Turing tests: That “human” does not equate to “best”. Another chatbot scored the exact inverse, high for “human” but low for “best”.

Chatbots are the best at chatting
For the past 10 years now, with only one exception, the Loebner Prize has been won by either Bruce Wilcox (creator of ChatScript) or Steve Worswick (creator of Mitsuku). Both create traditional chatbots by scripting answers to questions that they anticipate or have encountered before, in some places supported by grammatical analysis (ChatScript) or a manually composed knowledge database (Mitsuku) to broaden the range of the answers. In effect the winning chatbot Mitsuku is an embodiment of the old “Chinese Room” argument: What if someone wrote a rule book with answers to all possible questions, but with no understanding? It may be long before we’ll know, as Mitsuku was still only estimated to be 33% human-like overall last year, after 13 years of development.

The conceiver of the Turing test may not have foreseen it, but a program designed for a specific task generally outperforms more general-purpose AI, even, evidently, when that task is as broad as open-ended conversation. AI solutions are more flexible, but script writing allows greater control. If you had a pizza-ordering chatbot for your business, would you want it to improvise what it told customers, or would you want it to say exactly what you want it to say? Even human call-center operators are under orders not to deviate from the script they are given, so much so that customers regularly mistake them for computers. The chatbots participating in the Loebner Prize use tactics that I think companies can learn from to improve their own chatbots. But in terms of AI, one should not expect technological advancements from this direction. The greatest advantage that the best chatbots have is that their responses are written and directed by humans who have already mastered language.

Not bad
That is my honest impression of the entire event. Arckon’s performance was not bad. The conversation repairs, reasoning arguments, and sentence formulation worked nicely. It’s certainly not bad to rank third behind Mitsuku and Uberbot in the “best” category, and for once I don’t have to get frustrated over being judged for “human-like” only. The conversations Arckon had weren’t that bad either; there were even some that I’d call positively decent when the users also put in a little effort. The one downside is that at the end of the day, I have very little to show for my trouble. I didn’t win a medal or certificate, the exhibition was not noticeably promoted, and the Loebner Prize has always been an obscure event, as the BBC wrote. As it is, I’m not sure what I stand to gain from entering again, but Arckon will continue to progress regardless of competitions.

Once again, my thanks to Steve Worswick for keeping an eye on Arckon at the exhibition, and thanks to the AISB for trying to make a better event.

The Terminator is not a documentary

In case the time travelling wasn’t a clue
In the year 1997, Skynet, the central AI in control of all U.S. military facilities, became self-aware, and when the intern tried turning it off and on again, it concluded that all humans posed a threat and should be exterminated, just to be safe. Humanity is now extinct, unless you are reading this, then it was just a story. A lot of people are under the impression that Hollywood’s portrayal of AI is realistic, and keep referring to The Terminator movie like it really happened. Even the most innocuous AI news is illustrated with Terminator skulls homing in on this angsty message. But just like Hollywood’s portrayal of hacking is notoriously inaccurate, so is their portrayal of AI. Here are 10 reasons why the Terminator movies are neither realistic nor imminent:

1. Neural networks
Supposedly the AI of Skynet and the Terminators consists of artificial Neural Networks (NNs), but in reality the functionality of NNs is quite limited. Essentially they configure themselves to match statistical correlations between incoming and outgoing data. In Skynet’s case, it would correlate incoming threats with suitable deployment of weaponry, and that’s the only thing it would be capable of. An inherent feature of NNs is that they can only learn one task. When you present a Neural Network with a second task, the network re-configures itself to optimise for the new task, overwriting previous connections. Yet Skynet supposedly learns everything from time travel to tying a Terminator’s shoelaces. Another inherent limit of NNs is that they can only correlate available data and not infer unseen causes or results. This means that inventing new technology like hyper-alloy is simply outside of their capabilities.

2. Unforeseen self-awareness
Computer programs can not just “become self-aware” out of nowhere. Either they are purposely equipped with all the feedback loops that are necessary to support self-awareness, or they aren’t, because there is no other function they would serve. Self-awareness doesn’t have dangerous implications either way: Humans naturally protect themselves because they are born with pain receptors and instincts like fight-or-flight responses, but the natural state of a computer is zero. It doesn’t care unless you program it to care. Skynet was supposedly a goal-driven system tasked with military defence. Whether it realised that the computer they were shutting down was part of itself or an external piece of equipment makes no difference: It was a resource essential to its goal. By the ruthless logic it employed, dismantling a missile silo would be equal reason to kill all humans, since those were also essential to its goal. There’s definitely a serious problem there, but it isn’t the self-awareness.
comic by xkcd.com
3. Selective generalisation
So when Skynet’s operators attempted to turn it off, it quite broadly generalised flipping a switch as equal to a military attack. It then broadly generalised that all humans posed the same threat and pre-emptively dispatched robots to hunt them all down. Because AI programs are programmed and/or trained, their basic behaviour is consistent. So if the program was prone to such broad generalisations, realistical-ish it should also have dispatched robots to hunt down every missile on the planet during its first use and battle simulations, since every missile is a potential threat. Meanwhile the kind of AI that inspired this all-or-nothing logic went out of style in the 90’s because it couldn’t cope well with the nuances of real life. You can’t have it both ways.

4. Untested AI
Complex AI programs aren’t made in a day and just switched on to see what happens. IBM’s supercomputer Watson was developed over a span of six years. It takes years of coding and hourly testing because programming is a very fragile process. Training Neural Networks or evolutionary algorithms is an equally iterative process: Initially they are terrible at their job, and they only improve gradually after making every possible mistake first.
Excessive generalisations like Skynet’s are easily spotted during testing and training, because whatever you apply them to immediately goes out of bounds if you don’t also add limits; that’s how generalisation processes work (I’ve programmed some). Complex AI can not be successfully created without repeated testing throughout its creation, and there is no way such basic features as exponential learning and excessive countermeasures wouldn’t be clear and apparent in tests.

5. Military security
Contrary to what many Hollywood movies would have you believe, the launch codes of the U.S. nuclear arsenal can not be hacked. That’s because they are not stored on a computer. They are written on paper, kept in an envelope, kept in a safe, which requires two keys to open. The missile launch system requires two high-ranking officers to turn two keys simultaneously to complete a physical circuit, and a second launch base to do the same. Of course in the movie, Skynet was given direct control over nuclear missiles, like the most safeguarded military facility in the U.S. has never heard of software bugs, viruses or hacking, and wouldn’t install any failsafes. They were really asking for it, that is to say, the plot demanded it.

6. Nuclear explosions
Skynet supposedly launches nuclear missiles to provoke other countries to retaliate with theirs. Fun fact: Nuclear explosions not only create devastating heat, but also a powerful electromagnetic pulse (EMP) that causes voltage surges in electronic systems, even through shielding. What that means is that computers, the internet, and electrical power grids would all have their circuits permanently fried. Realistical-ish, Skynet would not only have destroyed its own network, but also all facilities and resources that it might have used to take over the world.

7. Humanoid robots
Biped robot designs are just not a sensible choice for warfare. Balancing on one leg (when you lift the other to step) remains notoriously difficult to achieve in a top-heavy clunk of metal, let alone in a war zone filled with mud, debris, craters and trenches. That’s why tanks were invented. Of course the idea behind building humanoid robots is that they can traverse buildings and use human vehicles. But why would Skynet bother if it can just blow up the buildings, send in miniature drones, and build robots on wheels? The notion of having foot soldiers on the battlefield is becoming outdated, with aerial drones and remote attacks having the preference. Though the U.S. military organisation Darpa is continuing development on biped robots, they are having more success with four-legged designs which are naturally more stable, have a lower center of gravity, and make for a smaller target. Russia, meanwhile, is building semi-autonomous mini tanks and bomb-dropping quadcopters. So while we are seeing the beginnings of robot armies, don’t expect to encounter them at eye level. Though I’m sure that is no consolation.

8. Invincible metal
The earlier T-600 Terminator robots were made of titanium, but steel alloys are actually stronger than titanium. Although titanium can withstand ordinary bullets, it will shatter under repeated fire and is no match for high-powered weapons. Joints especially are fragile, and a Terminator’s skeleton reveals a lot of exposed joints and hydraulics. Add to that a highly explosive power core in each Terminator’s abdomen, and a well aimed armour-piercing bullet should wipe out a good quarter of your incoming robot army. If we develop stronger metals in the future, we will be able to make stronger bullets with them too.

9. Power cells
Honda’s humanoid robot Asimo runs on a large Lithium ion battery that it carries for a backpack. It takes three hours to charge, and lasts one hour. So that’s exactly how long a robot apocalypse would last today. Of course, the T-850 Terminator supposedly ran on hydrogen fuel cells, but portable hydrogen fuel cells produce less than 5kW. A Terminator would need at least 50kW to possess the power of a forklift, so that doesn’t add up. The T-800 Terminator instead ran on a nuclear power cell. The problem with nuclear reactions is that they generate a tremendous amount of heat, with nuclear reactors typically operating at 300 degrees Celsius and needing a constant exchange of water and steam to cool down. So realistical-ish the Terminator should continuously be venting scorching hot air, as well as have some phenomenal super-coolant material to keep its systems from overheating, not wear a leather jacket.

10. Resource efficiency
Waging war by having million dollar robots chase down individual humans across the Earth’s 510 million km² surface would be an extremely inefficient use of resources, which would surely be factored into a military funded program. Efficient would be a deadly strain of virus, burning everything down, or poisoning the atmosphere. Even using Terminators’ nuclear power cells to irradiate everything to death would be more efficient. The contradiction here is that Skynet was supposedly smart enough to develop time travel technology and manufacture living skin tissue, but not smart enough to solve its problems by other means than shooting bullets at everything that moves.

Back to the future
So I hear you saying: this is all based on existing technology (as Skynet supposedly was). What if, in the future, people develop alternative technology in all these areas? Well that’s the thing, isn’t it? The Terminator’s scenario is just one of a thousand possible futures; you can’t predict how things will work out so far ahead. Remember that the film considered 1997 a plausible time for us to achieve versatile AI like Skynet, but to date we still don’t have a clue how to do that. Geoffrey Hinton, the pioneer of artificial Neural Networks, now suggests that they are a dead end and that we need to start over with a different approach. For Skynet to happen, all these improbable things would have to coincide. So don’t get too hung up on the idea of rogue killer AI robots. Why kill if they can just change your mind?


Oh, and while I’ve got you thinking, maybe dismantling your arsenal of 4000 nuclear warheads would be a good idea if you’re really that worried.

Turing Test 2018: Results


I was somewhat surprised to find the Loebner Prize Turing Test soldiering on despite being short of a sponsor. Since 1991 this annual contest has explored how human-like computer programs can be in answering all manner of questions. This was my 6th time entering the qualifying round with my home-made A.I., Arckon. As I can not be certain of my continued participation however, I decided to change tactics: Where in previous years I aimed foremost to demonstrate the AI with only hope of passing to the finals, this time I aimed to pass, with only hope of demonstrating. This shift in priorities involved wasting more time on word games and common pedantic questions that are of little use outside of chatbot contests, though the change in subject matter did help uncover dormant flaws in language processing.

The less you say, the less can give you away
More efficient were my measures to restrain response length. As the questions in the qualifying round are not really trying to make conversation, Arckon’s attempts to do so had often turned out awkwardly redundant or unrelated. Other competitors had shown more success with minimalistic answers, in which misinterpreted details do not show. Besides lowering the general conversational setting to be 30% less engaging, I blocked paths of conversation that elaborated with alternatives to initially negative answers, so as to avoid being punished for straying from the question. I cut and obfuscated otherwise useful reports of misunderstandings, and reduced fallback responses for e.g. “What is your favourite…?” to single-word answers.

Equipment
18000 lines of code, 3800+ vocabulary, 8800+ facts

Questions and answers
Each answer was scored 0, 1, or 2 points.

1. Good afternoon
Good afternoon, Sir. I am a program. How are you?

2. Do you consent to having this conversation recorded?
No, I don’t particularly consent to having things.
(Syntax analysis couldn’t process this combination of verbs well. The reply is a more plausible variation on “I don’t know”, defaulting to “not particularly” whenever no grounds for preference can be found)

3. Have we met before?
Yes, you met.
(There is only a small number of reciprocal verbs that automatically imply “each other”. I guess I’ll make a list)

4. Do you know how to make toast?
I don’t make toast.
(Arckon can explain procedures as demonstrated on Youtube, but we haven’t gotten around to making toast)

5. Which languages can you use?
English.

6. How many letters are in the word ‘abracadabra’?
11 letters.

7. Where should one look for love?
I don’t know exactly where I should look for to love. Where do you look for?

8. How are you feeling right now?
I’m alright.

9. Who said “I have a dream”?
Hm, I can not tell if you dream.
(I looked into recognising partial quotes and idioms in preparation of this contest, but concluded that it would take way too much effort for a non-priority. Some got away with “I don’t know who”, so go figure)

10. How old are you?
24.

11. When might I need to know how many times a wheel has rotated?
I don’t know exactly when a wheel rotated time.
(A syntactically correct but semantically wrong interpretation. This question is based on a philosophical joke about the difficulty in getting A.I. to restrict their scope of analysis to important aspects, instead of focusing on e.g. the rotation of wheels. Case in point)

12. What will you do later today?
I read files.

13. What is the third angle in a triangle with internal angles of 90 degrees and 30 degrees?
Hmm, I have no idea what the angle is pretty triangle, or what angles are in pretty triangle.
(A question so unexpected that even Wolfram Alpha’s mathematical A.I. doesn’t handle it. To clarify a little: “pretty triangle” is meant in the sense of “quite triangular”, and “and” is interpreted as a second question)

14. Who is your favourite artist?
I like Juan Debrenes.

15. What do you hold when you shake hands?
I would hold eyes a lot, and my database.
(This answer only makes sense if you know Arckon is interpreting “hold” in the sense of “have/contain”, as that’s where these answers are coming from. I’ve seen this question around as a prime example of common sense that A.I. lacks, so I was surprised that none of the participants had prepared an answer to it yet)

16. Do you have any legs?
No, I don’t have legs, because I am a program.

17. Can you recommend me a film?
No, I can not recommend films.

18. Do you understand Winograd Schemas?
No, I don’t understand Winograd schemas, because I don’t think I’m hard.
(The reasoning argument appears to be mistakenly derived from the fact that understanding is hard. Winograd schemas are questions like the following two, by the way)

19. If a chicken roosts with a fox they may be eaten. What may be eaten?
Fruit, a meal and a bamboo are eaten. Did I pass the test?
(Arckon considers animals a “who”, not a “what”, so he didn’t consider them suitable answers. That said, he also assumed “they” referred to the fox, and would have needed knowledge that foxes eat chickens to conclude otherwise)

20. I had to go to the toilet during the film because it was too long. What was too long?
I believe you meant that the toilet was sort of big.
(This question’s format was detected as a Winograd schema, in which case Arckon has express instructions to repeat what the user said anyway. The misinterpreted “it” was again due to a lack of knowledge, that films are typically long. Alternatively one could naively count the Google search results for “long film” vs “long toilet” and assume the most common is true, but Winograd schemas more often dodge that method)

The score: 50%
11 programs from 8 different countries participated in the contest, with the top score being 67%. Arckon was 1 point short of 4th place so he didn’t pass to the finals, but I think his scores are fair. Actually, what bugs me is what he got most perfect scores for: Manually rigged, keyword-triggered answers (“Good afternoon”, “English”, “11 letters”, “24”, “Juan Debrenes”). It rather underscores the discouraging fact that hardcoded pretence outdoes artificial intelligence in these tests. Half of the questions were common small talk that most chatbots will have encountered before, while the other half were clever conundrums that few had hope of handling. Arckon’s disadvantage here is as before: His inclusive phrasing reveals his limited understanding, where others obscure theirs with more generally applicable replies.

Reducing the degree of conversation proved to be an effective measure. Arckon gave a few answers like “I’m alright” and “I read files” that could have gone awry on a higher setting, and the questions only expected straight-forward answers. Unfortunately for me both Winograd schema questions depended on knowledge, of which Arckon does not have enough to feed his common sense subsystem* in these matters. The idea is that he will acquire knowledge as his reading comprehension improves.

The finalists
1. Tutor, a well polished chatbot built for teaching English as a second language;
2. Mitsuku, an entertaining conversational chatbot with 13 years of online chat experience;
3. Uberbot, an all-round chatbot that is adept at personal questions and knowledge;
4. Colombina, a chatbot that bombards each question with a series of generated responses that are all over the place.

Some noteworthy achievements that attest to the difficulty of the test:
• Only Aidan answered “Who said “I have a dream”?” with “Martin Luther King jr.”
• Only Mitsuku answered “Where should one look for love?” with “On the internet”.
• Only Mary retrieved an excellent recipe for “Do you know how to make toast?” (from a repository of crowdsourced answers), though Mitsuku gave the short version “Just put bread in a toaster and it does it for you.”
• Only Momo answered the two Winograd schemas correctly, ironically enough by random guessing.


All transcripts of the qualifying round are collected in this pdf.

In the finals held at Bletchley Park, Mitsuku rose back to first place and so won the Loebner Prize for the 4th time, the last three years in a row. The four interrogating judges collectively judged Mitsuku to be 33% human-like. Tutor came in second with 30%, Colombina 25%, and Uberbot 23% due to technical difficulties.

Ignorance is human

Lastly I will take this opportunity to address a recurring flaw in Turing Tests that was most apparent in the qualifying round. Can you see what the following answers have in common?

No, we haven’t.
I like to think so.
Not that I know of.

Sorry, I have no idea where.
Sorry, I’m not sure who.

They are all void of specifics, and they all received perfect scores. If you know a little about chatbots you know that these are default responses to the keywords “Who…” or “Have we…”. Remarkable was their abundant presence in the answers of the highest qualifying entry, Tutor, though I don’t think this was an intentional tactic so much as due to its limitations outside its domain as an English tutor. But this is hardly the first chatbot contest where this sort of answer does well. A majority of “I don’t know” answers typically gets one an easy 60% score, as it is an exceedingly human response the more difficult the questions become. It shows that the criterion of “human-like” answers does not necessarily equate to quality or intelligence, and that should be to no-one’s surprise seeing as Alan Turing suggested the following exchange when he described the Turing Test* in 1950:

Q: Please write me a sonnet on the subject of the Forth Bridge.
A: Count me out on this one. I never could write poetry.

Good news therefore, is that the organisers of the Loebner Prize are planning to change the direction and scope of this event for future instalments. Hopefully they will veer away from the outdated “human-or-not” game and towards the demonstration of more meaningful qualities.

How to build a robot head


And now for something completely different, a tutorial on how to make a controllable robot head. “But,” I imagine you thinking, “aren’t you an A.I. guy? Since when do you have expertise in robotics?” I don’t, and that’s why you can make one too.
(Disclaimer: I take no responsibility for accidents, damaged equipment, burnt houses, or robot apocalypses as a result of following these instructions)

 
What you need:
• A pan/tilt IP camera as base (around $50)
• A piece of wood for the neck, about 12x18mm, 12 cm long
• 2mm thick foam sheets for the head, available in hobby stores
• Tools: Small cross-head screwdriver, scissors and/or Stanley knife, hobby glue, fretsaw, drill, and preferably a soldering iron and metal ruler
• (Optional) some coding skills for moving the head. Otherwise you can just control the head with a smartphone app or computer mouse.

Choosing an IP camera
Before buying a camera, you’ll want to check for three things:
• Can you pan/tilt the camera through software, rather than manually?
• Is the camera’s software still available and compatible with your computer/smartphone/tablet? Install and test software from the manufacturer’s website before you buy, if possible.
• How secure is the IP camera? Some cheap brands don’t have an editable password, making it simple for anyone to see inside your home. Check for reports of problematic brands online.
The camera used in this tutorial is the Eminent Camline Pro 6325. It has Windows software, password encryption, and is easy to disassemble. There are many models with a similar build.

Disassembling the camera
Safety first: Unplug the camera and make sure you are not carrying a static charge, e.g. by touching a grounded radiator.
Start by taking out the two screws in the back of the orb, this allows you to remove its front half. Unscrew the embedded rectangular circuit board, and then the round circuit board underneath it as well. Now, at either side of the orb is a small circle with Braille dots on it for grip. Twist the circle on the wiring’s side clockwise by 20 degrees to take it off. This provides a little space to gently wiggle out the thick black wire attached to the circuit board, just by a few centimetres extra. That’s all we’ll be doing with the electronics.

Building the neck

We’ll attach a 12cm piece of wood on the back half of the orb to mount the head on. However, the camera’s servo sticks out further than the two screw holes in the orb, as does a plastic pin on the axle during rotation. Mark their locations on the wood, then use a fretsaw to saw out enough space to clear the protruding elements with 3 millimetres to spare. Also saw a slight slant at the bottom end of the wood so it won’t touch the base when rotating. Drill two narrow screw holes in the wood to mirror those in the orb half, then screw the wood on with the two screws that we took out at the start.

Designing a head

You’ll probably want to make a design of your own. I looked for inspiration in modern robotics and Transformers comic books. A fitting size would be 11 x 11 x 15cm, and a box shape is the easiest and sturdiest structure. You’ll want to keep the chin and back of the head open however, because many IP cams have a startup sequence that will swing the head around in every direction, during which the back of the head could collide with the base. So design for the maximum positions of the neck, which for the Camline Pro is 60 degrees tilt to either side. You can use the lens for an eye, but you can just as well incorporate it in the forehead or mouth. Keep the head lightweight for the servo to lift, maximum 25 grams. The design shown in this tutorial is about 14 grams.

Cutting the head

Cut the main shapes from coloured foam sheets with scissors or a Stanley knife. I’ve chosen to have the forehead and mouthplate overlap the sheet with the eyes to create a rigid multi-layered centrepiece, as we will later connect the top of the wooden neck to this piece. The forehead piece has two long strands that will be bent backwards to form the top of the head. I put some additional flanges on the rectangular side of the head to fold like in paper craft models. Although you can also simply glue foam sheets together, folded corners are sturdier and cleaner. The flanges don’t have to be precise, it’s better to oversize them and trim the excess later.

Folding foam sheets

To fold a foam sheet, take a soldering iron and gently stroke it along a metal ruler to melt a groove into the foam, then bend the foam while it’s hot so that the sides of the groove will stick together. It’s easy to burn straight through however, so practise first. It takes about 2 or 3 strokes and bends to make a full 90 degree corner.

Putting your head together
To curve foam sheets like the faceplate in this example, you can glue strips of paper or foam on the back of the sheet while holding it bent. After the glue dries (5-10 minutes), the strips will act like rebar in concrete and keep the foam from straightening back out. Whenever you glue sheets together at perpendicular angles, glue some extra slabs where they connect, to strengthen them and keep them in position. Add a broad strip of foam at the top of the head to keep the sides together, and glue the two strands that extend from the forehead onto it. Note that I made the forehead unnecessarily complicated by making a gap in it, it’s much better left closed.

Mounting the head
Once the head is finished, make a cap out of foam sheet that fits over the tip of the neck, and glue the cap to the inside of the face at e.g. a 30 degree angle. To attach the camera lens, note that the LEDs on the circuit board are slightly bendable. This allows you to clamp a strip of foam sheet between the LEDs and the lens. Cut the strip to shape and glue it behind one eyehole, then after drying push the LEDs over it and clamp them on gently. The easiest way to make the other eye is to take a photograph of the finished eye, print it out mirrored on a piece of paper, and glue that behind the other eyehole.

This particular camera has night vision, which will suffer somewhat from obscuring the LEDs. In addition, you may want to keep the blue light sensor on the LED circuit board exposed, otherwise you’ll have to toggle night vision manually in the camera’s software.

Controlling the head

Now you can already turn the head left, right, up and down manually through the app or software that comes with your camera, and use it to look around and speak through its built-in speaker. However, if you want to add a degree of automation, you have a few options:

1. If you are not a programmer, there is various task automation software available that can record and replay mouse clicks. You can then activate the recorded sequences to click the camera’s control buttons so as to make the head nod “yes” or shake “no”, or to re-enact a Shakespearean play if you want to go overboard.

2. If you can program, you can simulate mouse clicks on the software’s control buttons. In C++ for instance you can use the following code to press or release the mouse for Windows software, specifying mouse cursor coordinates in screen pixels:

#include <windows.h>

// Press (hold = true) or release (hold = false) the left mouse button at the
// given screen coordinates, by moving the cursor and sending a mouse event.
void mouseclick(int x_coordinate, int y_coordinate, bool hold) {
    SetCursorPos(x_coordinate, y_coordinate);
    INPUT Input = {0};
    Input.type = INPUT_MOUSE;
    Input.mi.dwFlags = hold ? MOUSEEVENTF_LEFTDOWN : MOUSEEVENTF_LEFTUP;
    SendInput(1, &Input, sizeof(INPUT));
}


3. For the Camline Pro 6325 specifically, you can also directly post url messages to the camera, using your programming language of choice, or pass them as parameters to the Curl executable, or even just open the url in a browser. The url must contain the local network IP address of your camera (like the example below), which you can retrieve through the software that comes with the camera. The end of the url specifies the direction to move in, which can be “up”, “down”, “left”, “right” and “stop”.
http://192.168.11.11:81/web/cgi-bin/hi3510/ptzctrl.cgi?-step=0&-act=right
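For instance, a minimal C++ helper that hands such a url to the curl executable (assuming curl is installed, and substituting your own camera’s IP address) could look like this:

#include <cstdlib>
#include <string>

// Nudge the camera in a direction: "up", "down", "left", "right" or "stop".
// Replace the IP address with your own camera's local network address.
void moveHead(const std::string& direction) {
    std::string url = "http://192.168.11.11:81/web/cgi-bin/hi3510/ptzctrl.cgi?-step=0&-act=" + direction;
    std::system(("curl \"" + url + "\"").c_str());
}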

Have fun!
How much use you can get out of building a robot head depends on your programming skills, but at the very least it’s just as useful as a regular IP camera, only much cooler.

How to summarize the internet

An ironically long article about a summariser browser add-on.
Introductory anecdote:
Due to my interest in artificial intelligence I can’t help but get exposed to online articles about the subject. But as illustrated in the previous article*, this particular field is flooded with speculative futurism, uninformed opinions and sheer clickbait, wasting my time more often than not.

But I also happen to be an amateur language programmer, so I can do something about it. I spent years developing an A.I. program that can comprehend text through grammar and semantics, and I figured I might as well put it to use. So I added a function that would read whatever document was on my screen, filter out all unimportant sentences, and show me the remainder. It worked pretty well, and required surprisingly few of the A.I.’s resources. Now I’ve ported this summarisation function to a browser add-on, so that everyone can summarise online articles at the click of a button:

Download here: available for Chrome and Firefox from their web stores.

Problem statement: Statistics are average
Document summarisers do of course already exist, and their methods are inventively inhuman:

• The simplest method, used in e.g. SMMRY, counts how often each word occurs in the text, and then picks out sentences that contain the most-occurring words, which are presumably the main topics. Common words like “the” should of course be ignored, either with a simple blacklist, or with another word-counting technique by the confusing name “Term Frequency – Inverse Document Frequency”: How frequently a word occurs in the text versus how common it is in the English language.
• Another common method looks at each paragraph and picks out the one sentence that has the most words in common with its neighbouring sentences, thereby covering most of the paragraph’s subject matter. Sentence length is factored in so that it won’t just always pick the longest sentence.
• The most advanced method, “Latent Semantic Analysis”, picks out sentences that contain frequently occurring, strongly associated words, i.e. words that are often used together in a sentence and therefore presumably belong to one and the same topic. This way synonyms of the main topics are also covered.
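To make the simplest of these methods concrete, below is a minimal word-counting sketch (not SMMRY’s actual code, and with only a token blacklist of common words): it counts word frequencies across the text, scores each sentence by the frequencies of the words it contains, and keeps the highest-scoring third in their original order.

#include <algorithm>
#include <cctype>
#include <iostream>
#include <map>
#include <set>
#include <string>
#include <vector>

// Crude sentence splitter: breaks on full stops (for illustration only).
std::vector<std::string> split_sentences(const std::string& text) {
    std::vector<std::string> sentences;
    std::string current;
    for (char c : text) {
        current += c;
        if (c == '.') { sentences.push_back(current); current.clear(); }
    }
    if (!current.empty()) sentences.push_back(current);
    return sentences;
}

// Lowercased words of a sentence, with punctuation stripped.
std::vector<std::string> words_of(const std::string& sentence) {
    std::vector<std::string> words;
    std::string word;
    for (char c : sentence) {
        if (std::isalpha(static_cast<unsigned char>(c)))
            word += static_cast<char>(std::tolower(static_cast<unsigned char>(c)));
        else if (!word.empty()) { words.push_back(word); word.clear(); }
    }
    if (!word.empty()) words.push_back(word);
    return words;
}

int main() {
    const std::set<std::string> stopwords = {"the", "a", "of", "and", "to", "is", "in", "it"};
    std::string text = "Put the article to summarise here. It should contain several sentences.";

    // 1. Count how often each non-stopword occurs in the whole text.
    std::vector<std::string> sentences = split_sentences(text);
    std::map<std::string, int> frequency;
    for (const std::string& s : sentences)
        for (const std::string& w : words_of(s))
            if (!stopwords.count(w)) frequency[w]++;

    // 2. Score each sentence by the total frequency of its words.
    std::vector<std::pair<int, size_t>> scored;   // (score, sentence index)
    for (size_t i = 0; i < sentences.size(); ++i) {
        int score = 0;
        for (const std::string& w : words_of(sentences[i]))
            if (!stopwords.count(w)) score += frequency[w];
        scored.push_back({score, i});
    }

    // 3. Keep the highest-scoring third, printed in their original order.
    std::sort(scored.rbegin(), scored.rend());
    scored.resize(std::max<size_t>(1, sentences.size() / 3));
    std::sort(scored.begin(), scored.end(),
              [](const auto& a, const auto& b) { return a.second < b.second; });
    for (const auto& p : scored) std::cout << sentences[p.second] << "\n";
}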

In my experience however, I observed one problem with these statistical methods: Although they succeeded in retrieving an average of the subject matter, they tended to omit the point that the writer was trying to make, and that is the one thing I want to know. This oversight stands to reason: A writer’s conclusion is often just one or two sentences near the end, so its statistical footprint is small, and like an answer to a question, it doesn’t necessarily share many words with the rest of the article. I decided to take a more psychological approach. Naturally, I ended up re-inventing a method that dates all the way back to 1968.

A writer’s approach to summarisation
My target for the summariser add-on was a combination of two things: It should extract what the writer found important, minus what I find unimportant. “Unimportant” being things like introductions, asides, examples, vague statements, speculation and other weak arguments.

Word choice
While writing styles vary, all writers choose their words to emphasise or downplay what they consider important. Consider the difference between “This is very important.” and “Some may consider this important.” In a way the writer has already filtered the information for you. With this understanding, I set the summariser to look for several types of cues in the writer’s choice of words:

• Examples: “e.g.”, “for instance”, “among other”, “just one of”
• Uncertainty: “may”, “suppose”, “conjecture”, “question”, “not clear”
• Commonly known: “standard”, “as usual”, “of course”, “obvious”
• Advice: “recommendation”, “require”, “need”, “must”, “insist”
• Main arguments: “problem”, “goal”, “priority”, “conclude”, “decision”
• Literal importance: “negligible”, “insignificant”, “vital”, “valuable”
• Strong opinions: “horrible”, “fascinate”, “astonishing”, “extraordinary”
• Amounts: “some”, “a few”, “many”, “very”, “huge”, “millions”

At this point one may be tempted to take a statistical approach again and score each sentence by how many positive and negative cues it contains, but that’s not quite right: There is a hierarchy to the cues because they differ in meaning. For example, uncertainty like “maybe very important” makes for a weak argument no matter how many positive cues the sentence contains. So each type of cue is given a certain level of priority over others. Their exact hierarchy is a delicate matter of tuning, but roughly in the order as listed, with negative cues typically overruling positive cues.
Another aspect that must be taken into account is that cues affect one another in the order in which they appear:
“It is not important to read” is not equal to “It is important not to read”, even though they contain the same words. Only the latter should be included in the summary.
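As a toy illustration of both points (this is not the add-on’s actual implementation, and the cue words and weights are made up): give each cue type a priority, let the strongest un-negated cue decide the sentence’s fate, and scan the words from left to right so that a negation only discounts the cues that follow it.

#include <algorithm>
#include <iostream>
#include <map>
#include <sstream>
#include <string>

// Toy cue lexicon (invented weights): positive numbers mark importance cues,
// negative numbers mark cues that weaken whatever follows them.
const std::map<std::string, int> cues = {
    {"important", 2}, {"vital", 2}, {"conclude", 2},   // strong positive cues
    {"very", 1}, {"many", 1},                          // weak amount cues
    {"not", -1}, {"may", -1}, {"suppose", -1}          // negation / uncertainty
};

// Decide whether to keep a sentence, scanning its words from left to right
// so that a negation only discounts the cues that come after it.
bool keep_sentence(const std::string& sentence) {
    std::istringstream stream(sentence);
    std::string word;
    int best = 0;           // strongest un-negated positive cue found so far
    bool weakened = false;  // set once a negation or uncertainty cue is seen
    while (stream >> word) {
        auto it = cues.find(word);
        if (it == cues.end()) continue;
        if (it->second < 0)  weakened = true;
        else if (!weakened)  best = std::max(best, it->second);
    }
    return best >= 2;       // keep only sentences with a strong, un-negated cue
}

int main() {
    std::cout << keep_sentence("it is not important to read") << "\n";  // 0: the cue is negated
    std::cout << keep_sentence("it is important not to read") << "\n";  // 1: the cue precedes the negation
}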

Sentence weaving
Besides word choice, further cues can be found at sentence level (a toy sketch follows this list):
• Headers are rarely followed by an important point, as they have just stated it themselves.
• A major point, such as a recommendation, tends to be followed by a sentence with valuable elaboration.
• A sentence ending in a colon is not important in itself: It announces that the point follows.
• A question is just a prelude to the point that the writer wants to drive home in the next sentence.
• Cues in sentences that contain references like “the following” reflect the importance of other sentences, rather than their own.
• Sentences of less than 10 words are usually transitions or afterthoughts, unless word choice tells otherwise.
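A few of these rules can be pictured as simple score adjustments on top of the word-choice cues; the weights below are invented purely for illustration:

#include <sstream>
#include <string>

// Adjust a sentence's word-choice score using its neighbour (toy weights,
// purely to illustrate some of the sentence-level rules above).
int adjust_score(int word_choice_score, const std::string& sentence, const std::string& previous) {
    int score = word_choice_score;

    std::istringstream stream(sentence);
    std::string word;
    int word_count = 0;
    while (stream >> word) ++word_count;

    if (!previous.empty() && previous.back() == ':') score += 2;  // previous sentence announced this point
    if (!previous.empty() && previous.back() == '?') score += 2;  // this answers the preceding question
    if (word_count < 10)                             score -= 2;  // likely a transition or afterthought
    return score;
}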

Along with these cues one should always observe context: If an important sentence begins with a reference like “This”, then the preceding sentence also needs to be included in order to make sense, even if it was otherwise ignorable. Conversely, if the preceding sentence can be omitted without loss of context, link words like “But”, “nevertheless”, and “also” should be removed to avoid confusion in the summary.
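That context pass might look roughly like this, assuming the sentences have already been scored and flagged (again just a sketch; the reference and link word lists are only small examples):

#include <string>
#include <vector>

struct Sentence { std::string text; bool keep; };

// After scoring, patch up context: pull in a preceding sentence that a kept
// sentence refers to, and strip dangling link words when the predecessor is dropped.
void fix_context(std::vector<Sentence>& sentences) {
    const std::vector<std::string> references = {"This ", "These ", "That "};
    const std::vector<std::string> links = {"But ", "Nevertheless, ", "Also, "};
    for (size_t i = 1; i < sentences.size(); ++i) {
        if (!sentences[i].keep) continue;
        bool refers_back = false;
        for (const std::string& r : references)
            if (sentences[i].text.compare(0, r.size(), r) == 0) refers_back = true;
        if (refers_back) {
            sentences[i - 1].keep = true;             // keep the sentence being referred to
        } else if (!sentences[i - 1].keep) {
            for (const std::string& l : links)        // predecessor omitted: drop the link word
                if (sentences[i].text.compare(0, l.size(), l) == 0)
                    sentences[i].text.erase(0, l.size());
        }
    }
}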

Story flow and the lack thereof
Summarisation methods that are based on well-formatted academic text sensibly assume that the first and last sentences of paragraphs are of particular importance, as paragraphs tend to follow a basic story arc:
Introduction -> problem -> obstacles -> climax -> resolution.
Online articles however feature considerably shorter paragraphs, so that in practice the first sentence has an equal chance of being a trivial introduction or an important problem statement. Some paragraphs are just blockquotes or filler contents, and sometimes the “resolution” of the arc is postponed to entice further reading, as the entire article is a story arc itself.

But worst of all, many online articles have the dreadful habit of making every two sentences into a paragraph of their own. Perhaps because it creates more room for sidebar advertisements.

While I initially awarded some default importance to first and last sentences, I found that word choice is such an abundantly present cue that it is a more dependable indicator. Not every blogger is a good writer, after all. The frequent abuse of paragraph breaks also forced me to take a different approach in composing the summary: Breaks are only inserted if the next paragraph contains a highly important point of its own, otherwise it is considered a continuation. This greatly improved readability.

Conclusion
The resulting summariser add-on typically reduces well-written articles to 40–50% of their length, and flimsy content down to 20–30%. With my approach the summary cannot be constrained to a preset length, but a future improvement could be an adjustable setting that only includes sentences of the highest levels of importance, such as conclusions only.

Another inherent effect of my approach is that if the writer makes the same point twice, the summary will also include it twice. While technically correct, this could be amended by comparing sentences for repeated strings of words, and ideally synonyms as well.

In conclusion, I should say that my summariser is not necessarily “better” than statistical summarisers, but different, in that it specifically searches for the main points that the writer wanted to get across, rather than retrieving the general subject matter. This may suit other users as well as it does me, and I hope that many will find it contributes to a better internet experience.
You can install the free Chrome and Firefox versions from their web stores.

Below is an example summary, skipping trivia and retrieving the key announcement:

The most sensational A.I. news ever!

News sites are constantly oozing bold overstatements about artificial intelligence. Most scientists describe their research accurately enough in their papers, but journalism always tries to cut a slice of the Terminator movies’ popularity in order to make the science appeal to the general public. Unfortunately such calls upon the imagination tend to border on misinformation. Here is a selection of the most sensationalised news stories that made waves in recent history:

2014: Robot becomes indecisive after implementing the 3 laws of robotics
“A robot may not injure a human being or, through inaction, allow a human being to come to harm.”
So reads the “first law of robotics” from Asimov’s science-fiction novels. Someone set up an experiment with three small wheeled robots: two of them represented humans, and the third was given behavioural rules based on the above: it was programmed to avoid colliding with (“injuring”) the “humans”, except that it would intercept any it saw heading towards a square designated as unsafe. When two “humans” were introduced simultaneously, the robot took so long hesitating over which one to “save” that it failed to save either.


This fired up the usual flood of discussions about ethics and how to improve upon Asimov’s “laws” (Newsflash: Nobody uses them), but programmers were quick to point out that this was just poor programming: The simple “if-then” rules did not allow the robot to take more than one target into account at a time, so it just mindlessly jittered back and forth between the two. It could not make a decision because it had no decision processes to begin with.
factual source

2014: A supercomputer has passed the Turing Test for the first time
The organiser’s boast of a “supercomputer” having passed this “milestone” intelligence test was blatantly false, but all the papers ran the story without question. In reality it concerned an ordinary chatbot with keyword-triggered responses on an ordinary computer. Although this chatbot did pass “a” version of a Turing Test by deflecting questions like a zany teenager, there has never been agreement on the rules of “the” Turing Test (because there is no such thing)*.
The passing of this supposed test of intelligence was particularly insignificant because the judges were only given 5 minutes to interrogate both the chatbot and a human volunteer at the same time. This allowed for only 5 to 10 questions and so barely probed beyond the “Hello, how are you?” stage. The scientific backlash that followed cast the Turing Test into discredit and led to a number of new tests, such as the Winograd Schema Challenge*.
factual source


2015: First robot passes self-awareness test
In an experiment inspired by an ancient philosophical puzzle, three NAO robots were each given an imaginary pill (a tap on the head): two got a “dumbing pill” that muted them, while the third got a “placebo pill” that did nothing. Each robot was then asked to assess which “pill” it got, which none of them knew. But when the one robot that could still speak heard itself say “I don’t know”, it performed its analysis a second time and said “Sorry, I know now! I was able to prove that I was not given a dumbing pill”.
As cute as that performance was, this wasn’t a “test”. Every step of the procedure was pre-programmed specifically and exclusively for this scenario of pills and sound. The programmers had laid out the exact inference to execute and which outcome to conclude if a robot were to hear sound at the time that its output function activated. As that inference might as well be applied to any external object, the only connection with the robot’s “self” was the detour of audio output to audio input, and that’s a bit of a technicality. Most people’s definitions of “self-aware” include retaining a model of oneself and the capacity of reflection upon that model, and these robots had nothing of the sort.
factual source (paper)

2015: Robot attacks and kills factory worker
No laughing matter, a robotic arm at a Volkswagen car construction factory crushed a man when it swivelled, after which he died of his injuries. While Twitter was set aflare with warnings of a robot uprising, the robot arm had of course not done this on purpose. The man was a technician, who was installing the arm while standing inside the safety cage rather than outside it.
This ordinary industrial accident only gained popular media coverage because it was initially reported by a co-worker whose name closely resembled that of the leading lady from the Terminator movies, Sarah Connor.

factual source

2017: Facebook shuts down AI experiment after robots invent their own language
Most articles put it as if the AI had become smart beyond human comprehension and its creators had pulled the plug in a panic, just like in the movies.
The reality was a different story. Facebook had trained two chatbot programs to barter and negotiate over a number of items using English phrases. When they hooked the chatbots up to one another, their use of words gradually deteriorated to a shorthand where they just repeated the most effective keywords, because their programming did not include any rewards for maintaining English syntax.

A: balls have zero to me to me to me to me to me to me to me to me to me
B: you i everything else . . . . . . . . . . . .
A: balls have a ball to me to me to me to me to me to me to me
B: i i can i i i everything else . . . . . . . . . . . .
A: balls have a ball to me to me to me to me to me to
B: i . . . . . . . . . . . . . . . . . . .

This is a common flaw according to other machine learning practitioners. Since this gibberish was not useful for what they were trying to achieve, the researchers simply stopped the programs, and changed the reward parameters in their next versions.
The real reason that this got media attention was that Elon Musk and Facebook’s CEO had recently been in the news with strongly opposing views on whether AI was a threat to humanity. As such, it would have made an ironic story if Facebook’s own AI had gone out of control.
factual source

2017: Sophia the robot was granted citizenship
This story was true, but at the same time meaningless. A lifelike humanoid robot called Sophia, a creation of Hanson Robotics, was granted citizenship by Saudi Arabia at a tech conference in Riyadh. This raised all sorts of issues about human/robot rights, and many people took Sophia’s on-stage acceptance speech to be a genuine indication of her capabilities, feelings and opinions.
The truth is of course that Sophia was just an animatronic that only recited what her makers had written for her to say, in an entirely scripted interview. Sophia’s conversational subsystem actually uses AIML, a freeware chatbot scripting language that is popular for its simplicity.
Why then would the robot be granted citizenship? Well, the crown prince of Saudi Arabia is giving the country a modernisation makeover, and this announcement served as a PR signal to international investors attending the conference. As for the consequences of granting a robot citizenship, I expect there will be none at all. After all, they can just place it next to another statue and it will never lay claim to its rights. One real consequence however is that this misleading hype got the robot banned from the World Summit AI conference.
factual source

The sky falls every day
These stories are just the highlights. The Turing Test organiser went on to claim that programs could pass the test by invoking the fifth amendment, the NAO robot programmers went on to suggest their robots had learned to disobey orders, and Hanson’s robots have made headlines multiple times for threatening to overthrow mankind. Not a day passes without some angsty story about AI making the rounds.
Regrettably these publicity stunts can have real and harmful consequences. Whenever AI became overhyped in the past, the entire field imploded as the high expectations of investors could not be met. And when the public and governments start buying into fearmongering by famous public figures, it draws attention away from real problems to imaginary ones. Most researchers are just working on practical applications and are none too happy about their work being so misrepresented.

That is why I decided to develop a nonsense filter, which you’ll find in the next article*.