
"Old school" vs. "new school" artificial intelligence

Prof. Jan Scholtes, professor at the University of Maastricht and CSO at ZyLAB, today presented a lecture at the Raad van State (the Dutch Council of State) entitled "Artificial Intelligence and Law". The lecture focuses on "Machine Learning" as the new way of searching. The theme, also used in a series of four blog posts, builds on the "Big Data and Data Science" training that Prof. Scholtes, together with Prof. van den Herik of the Leiden Center of Data Science (LDCS), provides to a group of 20 Dutch judges. In his lecture, Prof. Scholtes emphasizes the recent advances in Artificial Intelligence (AI); compared with the first developments in AI almost 70 years ago, things are now accelerating rapidly.

AI: First Steps

As early as 1950, the British scientist Alan Turing described in his paper "Computing Machinery and Intelligence" (Mind 59: 433-460) an experiment with which he tried to show that a machine could exhibit human intelligence. In this so-called "Turing test", a computer attempts to make a person believe that it is a human. For Turing, the fact that a computer could succeed in doing so was proof that a computer can be intelligent.

In the decades that followed, AI evolved with ups and downs. Scientists tried to solve all kinds of information problems with general search methods, without the need for domain-specific knowledge. In addition to the first experiments with neural networks, many other so-called symbolic AI techniques were used to "program" human behavior as well as possible as a logical sequence of computer operations. ELIZA, the first computer "psychologist", is a good example.


Old School Artificial Intelligence

These developments resulted in the creation of the so-called expert systems or knowledge-based systems in the 1970s. These computer systems offer solutions based on knowledge within a certain well-defined domain, supplied by human experts. A well-known example is the interactive medical self-diagnosis database.

A knowledge-based system consists of a knowledge base and an "inference engine" that applies logical rules to this knowledge in order to derive new information and insights. The two components are kept strictly separate. This makes it possible to change the knowledge in order to solve a similar problem, or to reuse the knowledge in another system.
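
To make that separation concrete, here is a minimal sketch in Python of such a rule-based setup; the "medical" rules are invented purely for illustration and are not taken from any real system:

    # Knowledge base: each rule maps a set of conditions to a conclusion.
    # The rules are plain data, kept separate from the inference engine below.
    rules = [
        ({"fever", "cough"}, "possible_flu"),
        ({"possible_flu", "short_of_breath"}, "see_a_doctor"),
    ]

    def infer(facts, rules):
        """Forward chaining: apply rules until no new facts can be derived."""
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in rules:
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    print(infer({"fever", "cough", "short_of_breath"}, rules))

Because the knowledge is ordinary data, it can be edited or moved to another system without touching the inference engine itself.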

A significant disadvantage of expert systems is that knowledge engineers have to enter all knowledge by hand, as rules or program code. This is not only very labor-intensive; there is also no guarantee that a large set of rules will only derive consistent conclusions (see also Gödel's incompleteness theorems).


Self-learning Systems

One important requirement for a knowledge-based system is that it can 'learn'. By feeding the system with test results and correcting its errors, the system can eventually adjust its algorithms by itself, and the results become more and more accurate. Today, self-learning algorithms are especially good at recognizing and classifying speech, photos and language.
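
As a toy illustration of learning from corrected errors (not any specific product's algorithm), the sketch below trains a simple perceptron on an invented two-feature example: every wrong answer nudges the weights, correct answers leave the model unchanged, and accuracy improves over time:

    examples = [([1.0, 0.0], 1), ([0.9, 0.2], 1), ([0.1, 0.8], 0), ([0.0, 1.0], 0)]

    weights, bias, learning_rate = [0.0, 0.0], 0.0, 0.1

    for _ in range(20):                          # repeat over the training data
        for features, label in examples:
            score = sum(w * x for w, x in zip(weights, features)) + bias
            prediction = 1 if score > 0 else 0
            error = label - prediction           # 0 if correct, +1 or -1 if wrong
            # Only wrong answers change the model: this is the "correction" step.
            weights = [w + learning_rate * error * x
                       for w, x in zip(weights, features)]
            bias += learning_rate * error

    print(weights, bias)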


Databased Predictions

Self-learning computers are all about extracting knowledge by learning from data, using text mining, predictive analytics, so-called "predictive coding" or "machine learning". Text mining combines various advanced mathematical, statistical, linguistic and pattern-recognition techniques to automatically analyze unstructured information, extract relevant data and make text better searchable.

With text mining we can search at a higher linguistic level than with keywords alone. This way we can search without knowing exactly what to look for.
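
The sketch below, assuming scikit-learn is available and using a few invented documents, shows the basic idea behind this kind of indexing: documents and a query are turned into weighted TF-IDF vectors and ranked by similarity, rather than matched all-or-nothing on exact keywords:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    documents = [
        "The contract was terminated after repeated late payments.",
        "Invoice 2041 remains unpaid despite several reminders.",
        "Minutes of the annual shareholder meeting.",
    ]

    vectorizer = TfidfVectorizer(stop_words="english")
    doc_vectors = vectorizer.fit_transform(documents)
    query_vector = vectorizer.transform(["late payments and unpaid invoices"])

    # Rank documents by weighted similarity to the query instead of
    # all-or-nothing keyword matching.
    scores = cosine_similarity(query_vector, doc_vectors)[0]
    for score, doc in sorted(zip(scores, documents), reverse=True):
        print(f"{score:.2f}  {doc}")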

Machine learning means teaching a machine to discover patterns and connections in large datasets. A classification system is trained with so-called training data; new data is then classified based on the (latent) patterns discovered in the training data. In this way you can train a computer to organize and analyze documents. After enough training, it becomes possible to make predictions about new data.
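
A minimal sketch of such a "predictive coding" workflow, again assuming scikit-learn and a handful of invented training documents (a real review set would contain thousands), looks like this: a classifier is fitted on documents a human has already labeled and then predicts labels for unseen documents:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    train_texts = [
        "Please shred the Q3 payment records before the audit.",
        "Attached is the signed settlement agreement for the dispute.",
        "Lunch menu for the office party next Friday.",
        "Reminder: the parking garage closes early today.",
    ]
    train_labels = ["responsive", "responsive", "not_responsive", "not_responsive"]

    # Train on human-reviewed documents, then classify new, unseen ones.
    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(train_texts, train_labels)

    new_texts = ["Draft settlement terms attached for review.",
                 "The coffee machine on the 3rd floor is broken."]
    print(model.predict(new_texts))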

Self-learning computers go far beyond mere recognition and segmentation. Google, Facebook, Bol.com and Netflix also recommend other products based on the information we provide.


"New School AI" for the Judiciary

In the judiciary, self-learning computers are used extensively for the fast search and analysis of large amounts of textual data. By "reading", analyzing and organizing case files or case law, the computer can extract the core data and start to reason with it. Based on the arguments it finds, a computer is perfectly capable of predicting the verdict in certain legal cases or selecting the best lawyer and, according to Prof. van den Herik, "a robot will soon be able to replace a judge".


Self-learning Computers (But Now for Real)

The breakthroughs in AI follow each other in rapid succession. Last year, AlphaGo beat the world champion at Go, an ancient Chinese game that until that moment was assumed to be impossible for a computer to learn. AlphaGo learned the game by analyzing thousands of games, assisted by human experts.

The new version, AlphaGo Zero, is completely self-learning. "Learning from scratch", write its makers, DeepMind (part of Google), on their website. By pure experimentation and learning, AlphaGo Zero defeated its predecessor 100-0. The AI from DeepMind also proved capable of playing chess: within a few hours, its algorithm outperformed the most advanced chess computers in the world.


Challenges Beyond Games

The recent successes of AI are mainly found in the field of games. With games, it is clear when the computer is right or wrong: winning is good and losing is bad. Because of this, we can program the computer to play against itself and generate a virtually infinite amount of training data. This is exactly what Google DeepMind does: playing tens of millions of games. The idea itself is not new: in the movie War Games from 1983, the computer, after endless simulations, finally concludes that a nuclear war knows only losers.
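
As a toy illustration of that idea (vastly simpler than DeepMind's actual method), the sketch below lets two copies of the same random policy play a trivial game against each other: "take 1, 2 or 3 matches; whoever takes the last match wins". Every finished game adds labeled examples to a table of per-move statistics, which then serves as the learned policy:

    import random
    from collections import defaultdict

    wins = defaultdict(int)    # (matches_left, move) -> games won after this move
    plays = defaultdict(int)   # (matches_left, move) -> games in which it was played

    def self_play_game(start=15):
        """Play one game of random self-play; return the winner and all moves."""
        moves = {0: [], 1: []}
        matches, player = start, 0
        while True:
            move = random.randint(1, min(3, matches))
            moves[player].append((matches, move))
            matches -= move
            if matches == 0:
                return player, moves           # this player took the last match
            player = 1 - player

    for _ in range(20000):                     # self-play generates the training data
        winner, moves = self_play_game()
        for player, history in moves.items():
            for state_move in history:
                plays[state_move] += 1
                wins[state_move] += 1 if player == winner else 0

    def best_move(matches):
        """Pick the move with the highest observed win rate in the self-play data."""
        candidates = range(1, min(3, matches) + 1)
        return max(candidates,
                   key=lambda m: wins[(matches, m)] / max(plays[(matches, m)], 1))

    print([(n, best_move(n)) for n in range(1, 16)])

The point is not the game itself but the loop: the program produces as much of its own training data as it needs, with the win-or-lose outcome supplying the labels.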

Unfortunately, not everything is a game. And in the judiciary we cannot ignore the fact that there is simply too much data in many legal applications. Every search, no matter how good, returns too many documents to review. You never know exactly what you get or what you miss, you do not know exactly what to look for, and you make mistakes or drift off course. Besides, searching data is time-consuming, boring and tedious work.

We humans have cognitive limitations when processing large amounts of data to gain insights from it. We are simply not suited to successfully synthesizing large volumes of data.

Support from AI in organizing, analyzing and interpreting the facts in such large data collections is therefore needed. Only with the use of AI technology can we reduce the workload and improve the quality of the judiciary. The recent successful developments in various AI areas will soon help legal tech professionals to solve increasingly complex problems in the field of case law. This means we can hand over ever more of the boring work, reduce costs and increase the quality of the work.

All thanks to computers that have learned how to play games on their own!