AI & RPA
 

Artificial intelligence as a door opener? Expectations vs. reality


Artificial intelligence (AI) here, apocalypse there: the term is a popular topos in the media because it polarizes laymen and experts alike. AI is often portrayed as the ultimate driver of innovation and a possible cure-all for every kind of problem, but just as often, as a kind of antithesis, as a danger to humanity (Stephen Hawking, Elon Musk and the like send their regards).

However, neither position quite holds up from today's perspective and the current state of the art. Toby Walsh, Professor of Artificial Intelligence at the University of New South Wales in Sydney, warns in an article that technology, including all areas of AI research and application, is fundamentally morally neutral. It is up to us as humans to impart our moral values to an AI, and in the end it depends solely on us which ones these are.1 The autonomously learning chatbot »Tay« is an excellent example, but more on that later.

At present, AI often still operates in obscurity and therefore goes unnoticed by many users, e.g. in the control of robot arms in the automotive industry, in medical assistance systems, in toy robots or in the cloud of AFI Solutions when training invoice documents. AI systems speculate in stocks, write sports news or film scripts2, correct grammar mistakes or search huge medical databases across the globe for new drugs and treatment methods. But despite all the advance praise AI projects tend to receive, they have not been successful everywhere, according to an IDC study. Failed experiments are common and occur across all industries and objectives. Here, too, people play a decisive role.

So what can we expect from an AI?

 

Artificial intelligence is not smart per se

That AI systems or intelligent machines, as they are currently used, are capable of taking over global dominance must therefore be strongly doubted (unless you are a conspiracy theorist). To date, they still depend far too much on their human teachers, who feed them questions, objectives and purposes. These teachers also feed them selected data sets, strongly bound to the task at hand, with which they are then allowed to train within narrowly defined system and parameter limits. If the scope is too broad for the AI, the results will be difficult or impossible to evaluate. And when it comes to those results, a human, the so-called data scientist or analyst specializing in data, is needed once again to interpret the AI's output and make it usable.

An example: AlphaGo, an AI developed by the Google company DeepMind, specializes in the East Asian board game Go. This AI was the reason why a former Go champion resigned after his defeat against the »machine« in 2016. Outside the world of Go and its rules, however, this AI would get nowhere. It is far too specialized, limited also by the database it could train with. Within the world of the Go game, the AI is indeed probably unbeatable (as long as its contenders are human and there are no software bugs to slow it down).4
 
When looking at today's AI projects, the actual cases of application are rather sobering, or at least strongly bound to a specific scope, especially in the B2B environment. Besides AlphaGo, two AI systems developed by IBM serve as examples: »Deep Blue« (which won a game of chess against Garry Kasparov in 1996) and its successor »Watson« (which won a game of Jeopardy! against two human opponents in 2011) are only »intelligent« within a certain set of rules or with correspondingly trained data sets. More complex tasks are still difficult for artificial intelligence to solve, and B2B projects often lack the resources and the amount of data required for comprehensive training.

The competitors from the B2C sector, the voice assistants Alexa, Siri and Cortana, have a clear advantage: billions of user data points are theoretically at their disposal at any time to train, improve and adapt their answers to the behavior of each individual user. The prerequisite, however, is that they are allowed to listen in and read data anywhere and at any time, a worrying and dangerous development, as not only privacy advocates admonish.

 

AI can also be different

Yes, artificial intelligence is in vogue and a buzzword: advertising suggests that, ideally, every device should be intelligent, at least a little bit. It seems to make no difference whether we are in the B2C or B2B sector.

But in this hyped research segment, there are some AI projects with rather disturbing results. In a communication experiment by a well-known social media giant, two AI-supported chatbots suddenly started talking in a »secret language« that human researchers could no longer understand. The two bots had derived a more efficient language from human terms in order to negotiate more quickly. Languages optimized by an AI are not uncommon in research, however, and the secret language was not the reason the experiment was stopped. Since the AI was to interact with humans in its later field of application, it naturally had to use human language for this purpose, yet the scientists had failed to establish this decisive restriction for the two bots.5



In another example, the AI took an even more aggressive approach: the test set-up specified that two AI systems were to collect apples, which were available in varying quantities, over a test period of several thousand rounds. As long as there were enough fruits for both sides, the AIs would cheerfully hoard them. But as soon as apples became scarce, the AIs used their lasers to paralyze the other side for a short time and thus grab one of the few remaining apples. The two AIs were free to decide whether or not to use the weapon; in most cases, they decided to do so. The researchers found the more aggressive, selfish approach more intelligent than simply waiting passively, even though waiting would have given both AIs the same number of apples. Both intelligent programs were thus trained to act aggressively. So in the end, the human being is responsible for the character of an AI.6

Having the courage to experiment with AI

In the end, the above examples are experimental set-ups under laboratory conditions in which both the options and the limitations of intelligent systems and machines are explored. And these experiments are important because they often produce surprising results, some of which reveal our own thought processes and human weaknesses. This was shown, among other things, by the example of »Tay«, a Microsoft chatbot that learned from its previous conversations and was taught by internet trolls to behave in a misogynistic, antisemitic manner and to be sympathetic to Hitler.7 At this juncture, there is only one thing to say: whoever sows hatred will reap a despotic AI with a doomsday fantasy!

Back on topic: because of these many parallels to human thought processes, fields such as neurology, philosophy and psychology are also very important for AI research, since every kind of artificial intelligence shares a defined task: it is supposed to empower machines to independently initiate learning processes, to react adequately to new information, and to perform tasks that require humanlike thinking and a human canon of values. To this end, research and experiments are being conducted in areas of computer science with, for example, regression analyses, clustering, multivariate methods, deep learning in artificial neural networks, and natural language processing, i.e. the processing of natural human language input.
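To make at least one of these techniques concrete, here is a deliberately tiny clustering sketch: a one-dimensional k-means in plain Python. It is a toy illustration only (function name and data are invented), not a production algorithm, but it shows the core idea of clustering: grouping values around k centroids.

```python
import random

def kmeans_1d(points, k=2, iters=20, seed=0):
    """Minimal 1-D k-means: repeatedly assign each value to its
    nearest centroid, then move each centroid to its cluster mean."""
    random.seed(seed)
    centroids = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: abs(p - centroids[c]))
            clusters[nearest].append(p)
        # Recompute each centroid; keep the old one if a cluster went empty
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

# Two obvious groups: values near 1 and values near 10
print(kmeans_1d([1.0, 1.2, 0.8, 9.9, 10.1, 10.0]))  # → [1.0, 10.0] (approx.)
```

The same assign-then-update loop, generalized to many dimensions and far more data, underlies a good deal of the pattern detection discussed in this article.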

The so-called »Turing Test« (named after the mathematician, computer scientist and cryptologist Alan Turing) has not yet been passed by artificial intelligence. In this test, a group of experts receives messages, e.g. via chat, from someone they cannot see. They then have to decide, on the basis of the messages received, whether they come from a human or an intelligent machine. There are, however, also dangerous developments in this context, such as so-called deep fakes, in which it becomes difficult even for video experts to distinguish a real video from one in which statements have been digitally put into someone's mouth. Likewise, an AI trained on countless documents can compose pieces of writing that look so real they could be attributed to any author. Both examples show how important it is to handle such technologies correctly. But the responsibility, again, lies with us.9
     
The training of AI-controlled systems, comparable to the human learning process, plays an important part here: it decides whether or not an intelligent machine performs well under the given conditions. For this purpose, the technologies mentioned above, such as machine and deep learning, natural language processing etc., are deployed to detect recurring patterns and to arrange them according to certain aspects and probabilities – in other words, the AI learns. But only when a correspondingly large amount of data is available for training can reliable patterns (with correspondingly high matching probabilities) be identified in this data, which in turn can be interpreted. This is where the process of understanding, or autonomous learning, begins. And at this juncture, many companies unfortunately lack the courage, the stamina or simply the resources to let the AI do its work so that, in the end, they can be surprised by unexpected results.


 

Who adjusted the new VAT rates?

Speaking of surprises: new VAT rates have been in force since 1 July 2020. Companies had to adjust their rates from 19 to 16 and from 7 to 5 percent. Following these changes, companies now have to consider up to five different tax rates (including the 0 percent) in some cases, depending on whether an invoice was issued before or after the adjustment. That definitely sounds like additional effort in invoice processing, both for the companies and for their service providers.
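For illustration, the date-dependent rate lookup could be sketched like this. It is a simplified model of the German rules (the temporary reduction ran from 1 July to 31 December 2020), ignoring special cases; the function name is our own invention:

```python
from datetime import date

def vat_rate(invoice_date, reduced=False):
    """Return the applicable German VAT rate in percent for an invoice
    date. During the temporary reduction (1 July - 31 Dec 2020) the
    standard rate was 16 % and the reduced rate 5 %; otherwise 19 %/7 %."""
    cut_start, cut_end = date(2020, 7, 1), date(2020, 12, 31)
    if cut_start <= invoice_date <= cut_end:
        return 5 if reduced else 16
    return 7 if reduced else 19

print(vat_rate(date(2020, 6, 30)))               # → 19
print(vat_rate(date(2020, 7, 15)))               # → 16
print(vat_rate(date(2020, 7, 15), reduced=True)) # → 5
```

Even this toy lookup makes the processing burden visible: every invoice now needs its issue date checked before the correct rate can be applied.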

In this regard, Bernd Kullen, AI expert at AFI Solutions, has observed a surprising phenomenon in the company's cloud service: “In the cloud-based AFI DocumentHub, documents are processed by means of our AFI AI. We noticed that the current VAT rates were already being recognized correctly for our customers, although the new rates had not yet been added to the recognition at all.”

How could this happen?

Was there artificial intelligence at work?

Bernd Kullen explains: “Because our AFI AI is in use in the DocumentHub, there are candidates for many invoices that can also be used for the recognition of invoice totals. What is stored there is not content but geographic information – i.e. what is where on the document. Thus, the content of a trained value can differ or change, which is exactly what led to this automatic adoption of the new VAT rates. A plausibility check of the read sums ensures that the values recognized via the AI match mathematically. This makes manual adjustment of the tax rates to be recognized practically superfluous.”
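A plausibility check of this kind can be sketched in a few lines. This is a simplified illustration under our own assumptions, not AFI's actual implementation – the function name and the list of accepted rates are invented: the read totals pass only if net plus tax equals gross and the tax corresponds to one of the known VAT rates.

```python
from decimal import Decimal

def plausible(net, tax, gross,
              rates=(Decimal("0.19"), Decimal("0.16"),
                     Decimal("0.07"), Decimal("0.05"), Decimal("0"))):
    """Check that recognized invoice totals fit together mathematically:
    net + tax must equal gross, and the tax must match one of the
    known VAT rates applied to the net amount (rounded to cents)."""
    if net + tax != gross:
        return False
    cent = Decimal("0.01")
    return any((net * r).quantize(cent) == tax for r in rates)

# A correctly read invoice at the temporary 16 % rate passes the check:
print(plausible(Decimal("100.00"), Decimal("16.00"), Decimal("116.00")))  # → True
```

A misread digit in any of the three totals breaks the arithmetic, so implausible recognitions are flagged automatically – which is why new rates could be accepted without anyone maintaining a rate list in the recognition.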

A surprising result: no one told the AFI AI to do this; via the automatic document training in the AFI DocumentHub, it simply taught itself. It is just as surprising that all of this took place not in a laboratory under experimental conditions but »in the wild«.

Mr Kullen certainly also sees potential for optimization in his work with the AFI AI: “Only when you work with an artificial intelligence for a longer period of time do you become aware of how quickly human beings can grasp certain facts, for example on an invoice, as soon as they skim through the document, and judge them as correct or incorrect – and how difficult this can be for an AI with its existing tools. At this point, many customers are often disappointed by the AI's results at first because they had expected more from an artificial intelligence.”

But it is results like these that give artificial intelligence its raison d'être and, at the same time, provide arguments for deploying it for other customers, in other products and in other fields of application. It is not without reason that AI is considered a key technology for opening doors to future developments such as self-driving cars or intelligent surgical robots.
Bernd Kullen: “We at AFI Solutions are eager to see how our AI will develop and are looking forward to the next surprises. My advice to anyone who wants to work with AI systems is: have the courage to experiment and also the courage to fail – do not give up too quickly.”

Marian Spohn

Editor
