What Is Neural Matching? Google Just Changed How You Search the Web

A spiral of stained glass windows forms a pattern.

Neural matching is one of the most misunderstood algorithms among Web marketers. It is based on long-established image pattern analysis methods originally used to align satellite photos with maps.

It’s only been a few days since Google revealed they have been using a neural matching algorithm to modify their search results.  While I wrote about neural matching for the SEO Theory Premium Newsletter this week, I haven’t said much about it openly.  Although my education in computer science introduced me to artificial intelligence many years ago, my professional work has only occasionally wandered close to the topic.  I’m hardly an expert in artificial intelligence, but I understand a lot of the research that has been published in the field.  As someone who studies large systems, the searchable Web ecosystem, and search algorithms in general, I spend a fair amount of time reading technical papers that discuss artificial intelligence systems and algorithms.  It’s not a topic about which I wax loquacious, as a friend of mine would put it.  But Google’s admission that they have been using neural matching in recent algorithmic improvements caught my attention.

This is not a technical explanation of artificial intelligence or how neural networks are designed.  I will do my best to keep the jargon to a minimum.  And I’m not going to link to any sources for you because, well, if you read those types of documents you know where to find them and if you don’t, you probably won’t read them anyway.  Besides, you should try Google’s new algorithm to see what it finds for you.  They say you don’t have to know the exact words you’re looking for.  That’s what really piqued my interest.  I’ll come back to that further on in this article.

Neural matching is one of those “artificial intelligence” concepts that goes back a few decades. (Mathematicians began developing artificial intelligence algorithms at least as far back as the 1940s.)  Like so many concepts in artificial intelligence, neural matching originated in physiological psychology, where the concept was used to describe how memory works.  Neural matching was more of a black box idea, originally.  We still don’t really understand how the animal brain works (humans are not the only animals with memory, intelligence, cognizance, etc.).  We know that neurons make up the physical network nodes of a brain, but we’re still deciphering how all that works.

A typical higher-order brain (and this includes the brains of most mammals) is a superfast neural computer.  A brain regulates body functions (including breathing, heartbeat, chemical activity, etc.) and processes sensory data (visual, aural, tactile, gustatory, and olfactory).  The brain does all that continually.

And the brain thinks.  It thinks on a conscious level and a subconscious level.  But what is thought?  Do computers think, or are they merely executing autonomic processes?  Artificial intelligence systems are not as sophisticated as science fiction makes them out to be.  We don’t have anything like Tony Stark’s virtual assistants (J.A.R.V.I.S., who was incorporated into the Vision’s personality, and F.R.I.D.A.Y., who replaced J.A.R.V.I.S.).  Nor do we have anything like the Star Trek computers that engage in freeform conversations with their users.

Mobile and voice assistants like Amazon Echo, Apple Siri, Google Assistant, and Microsoft Cortana are capable of interacting with us, but they’re not yet ready to go off and design complicated functions the way a Star Trek computer does.  Amazon recently announced that it is their goal to give the Echo the ability to do this.  That will be easier said than done.

True Artificial Intelligence Must Be Creative

For many years, as a computer programmer developing business applications software (“accounting systems” and “software tools”), I helped to design, modify, or support applications (programs or systems of programs) that could write other software.  We also developed self-modifying programs.  That all sounds a bit terrifying, doesn’t it?  These were not artificial intelligence systems.  They were report writing tools, application program development tools, and similar “simple” programs.  We business application programmers had written so many similar programs through the years that it became standard practice at many companies to develop tools that would take templates for common programming functions and use them to generate new functions.
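Here is a minimal sketch of that kind of template-driven generator.  The table name and field list below are hypothetical, and the output is Python rather than the business languages those old tools actually emitted; the point is only that one program can stamp out another working program from a template.

MAINTENANCE_TEMPLATE = """\
def list_{table}(records):
    # Generated "file maintenance" report: print every record in {table}.
    for record in records:
        print(" | ".join(str(record[field]) for field in {fields!r}))
"""

def generate_maintenance_program(table, fields):
    """Fill in the template for a new data file and return runnable source code."""
    return MAINTENANCE_TEMPLATE.format(table=table, fields=fields)

source = generate_maintenance_program("sales_orders", ["order_id", "customer", "total"])
exec(source)  # defines list_sales_orders() from the generated source
list_sales_orders([{"order_id": 1, "customer": "Acme", "total": 99.50}])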

For example, if you added a new set of data files to your sales tracking and reporting system, it was pretty easy to write “file maintenance” programs and reports for them.  So it was only a small step further to write programs that generated these simple programs.  Using software to write software saved a lot of time, reduced human errors, and ensured a lot of consistency across large-scale application systems.  We didn’t have any of these “sprints” that so-called “agile development” uses today (it seems rather inflexible and inefficient to me, but that’s another story altogether).  When you had to bring a complex system online in 3 months you couldn’t afford to wait for incremental development to catch up with the needs of the user.  You developed a complex system in 3 months.  Period.

Artificial intelligence cannot do that.  That is why we still have systems analysts, programmers, and “developer guys”.  There are probably several times as many people writing original code for computers today as there were in 1990.  And maybe I am underestimating the total number of “coders” versus old-school “programmers”.

To compete with that much productivity, artificial intelligence systems would have to:

  • Identify problems to be solved, or tasks to be performed
  • Understand how to solve or perform those tasks
  • Develop and test the solutions
  • Integrate the solutions into existing software infrastructure

Computer scientists have been experimenting with methods and models for accomplishing all these tasks for years.  In the 1990s I wrote a short story I called “Simply Connected” where a computer programmer answered a support call from an old client.  The client’s computer system had been taken over by malicious software (yes, that evil practice goes back at least to the 1960s).  The software had a built-in security protocol that could detect (what today we call) malware and restore the system’s installed code to a prior state.  This is kind of what Microsoft Windows will do for you today, but it’s a clunky, interactive process.  You have to trigger the restoration.  And it requires a lot of storage capacity (disk space).  In my story, all that was somehow cleverly managed by a bit of code that used far less disk space.

Although we have used software that can rewrite itself for decades, the implementations have been limited.  Most of the examples you’ll hear about today are malicious programs that erase themselves once they have infected your system and done whatever damage they are designed to do.  An artificial intelligence that is truly creative would have to be capable of rewriting itself.  That is, the software would have to implement whatever it learns from its rigorous testing, without human intervention, in order to truly emulate animal brain function.

Technically, we can develop systems to do this.  So why don’t we write that software?  I think the answer is obvious: we can’t control it.  But, practically speaking, we don’t yet have a need for that kind of software.  If we did, it would already be available in the commercial software markets.

Creativity stems from learning.  You learn how to manipulate ideas other people have shared.  You learn how to replicate experiences using different stimuli.  We do this (in our brains) by modeling ideas (thinking ahead) and experimenting (anticipating or imagining how these new ideas should work) and finally by implementing our experiments (trying out our imagined experiments).

That sounds simple.  But do you have any idea how many millions of individual commands it would take to control another human being to, say, make a piece of pottery that uses a new design for decoration?  Maybe the number of commands would be on the order of billions, not millions.  The mere act of designing pottery is not that difficult.  We have used software to do things like this for decades.  But once you decide to translate the machine-created design into a set of coordinated actions, things become complicated.

To simplify the process of turning ideas, expressed as commands, into governed actions, we turn to single-purpose machines.  The human body processes a large number of simple commands just to type a single word.  That is because we use a multi-functional machine (the human body) to perform that task.  A typical desktop printer, by contrast, cannot do much more than scan documents, print them out, and maybe fax them.  Its repertoire of actions is orders of magnitude simpler than what the human body performs to type the word “the” into a computer.

A futuristic artificial intelligence would have to go well beyond learning from experiences and imagining new ideas in order to be effective.  The AI would have to be able to instruct something, some peripheral device, in the most minute details about how to implement the new ideas effectively.  Using today’s technology that literally requires a data center (a lot of computers).

So if you’re wondering when our robot overlords will be ready to take over the planet, that day is still far off.

Primitive Artificial Intelligence Doesn’t Have to Be So Creative

The definition of “artificial intelligence” is very flexible.  It includes many different concepts, including the concept of self-writing or self-modifying software.  Those would be atomic functions, primitive capabilities that are required for higher-level tasks.  A higher-level task might include identifying a simple problem, such as extracting an idea from a large body of documents.  A higher-level task might also include generating “experiments” that lead to the successful extraction of ideas from a large body of documents.

Search engines are relatively simple applications.  You can argue about how many “signals” or “factors” a search engine uses for rankings and filters but at the end of the day all we’re asking a search engine to do is to find stuff for us.  And we’re only asking it to search online files.  Online files are stored in a limited number of formats.  The sensory input a search engine requires is limited to one medium (albeit one which you and I don’t include among our traditional five senses).

Search engine technology arose out of a need to organize human information.  We called that discipline Information Retrieval, and information retrieval branched out in several directions very quickly during the 1960s, ’70s, and ’80s.  For example, when you walk into an auto parts store and ask for a part, the clerk at the counter might look at their inventory program to see if the part is in the warehouse.  Or you could type the part number into a search box at Amazon’s or Walmart’s Websites and see if they have it.  These are simple information retrieval tasks.
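The auto parts example boils down to an exact-match lookup, which is about the simplest form of information retrieval there is.  A minimal sketch (the part numbers and stock counts below are invented):

inventory = {
    "BRK-1044": {"description": "brake pad set", "in_stock": 12},
    "FLT-2210": {"description": "oil filter", "in_stock": 0},
}

def lookup(part_number):
    """Exact-match retrieval: either the part number is in the database or it isn't."""
    record = inventory.get(part_number)
    if record is None:
        return part_number + ": not carried"
    status = "in the warehouse" if record["in_stock"] else "out of stock"
    return f"{part_number}: {record['description']}, {status}"

print(lookup("BRK-1044"))   # BRK-1044: brake pad set, in the warehouse
print(lookup("XYZ-0000"))   # XYZ-0000: not carried

Keep that rigidity in mind – the rest of this article is about getting away from it.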

Document management systems were developed to handle scanned documents in the 1980s and 1990s.  The business world had accumulated warehouses full of paper documents.  The paper documents were hard to search.  They were gradually deteriorating because paper rots, their storage facilities allowed moisture to contaminate them, bugs got into the boxes, and fires occasionally destroyed old records.  Computer scientists (and just ordinary programmer guys like me) developed methods for scanning documents electronically, storing those images, and tracking those stored images through traditional text-information databases.  The databases had to be searchable, of course, and eventually people demanded that the scanned documents be searchable.

Replacing the age-old task of sending some clerk into a warehouse to search through stacks of boxes filled with paper required a lot of human ingenuity, but we did it.  I used to work for a software firm that developed optical character recognition (OCR) software (I was not involved in that project).  They had to use primitive artificial intelligence to improve the OCR technology.  Today, you can download OCR software off the Web for free.  And it’s probably better than the stuff I saw in action in the 1980s and 1990s.

Converting a digital image into text-based data allows you to create a searchable database.  But this is an inefficient approach to searching for information.  The human brain doesn’t work that way.  When you pick up a piece of paper or a book and search through it for information, your brain is scanning (and interpreting) digital images (or maybe analog images – we’re not really sure).  All we ask of computers is that they do what the human brain does, at least when it comes to searching for information.

Neural Matching Was First Applied in Optics

Optics is a fairly robust field of investigation, study, and technology.  Suffice it to say that, in technology, optics covers all those things we do to replicate human vision and visual processing.  We want our computers to “see” documents (we begin by scanning them), to “recognize” those documents as documents, and to be able to manage those documents the way we do.  We want computers to read the documents, store the information extracted by reading the documents, and retrieve it when we need it.

It sounds simple but it’s much more complicated than that.  Of course, not every document we need computers to read and understand is an old invoice or a book.  Maps and photographs were utilized in early optical analysis systems.  Government agencies (military and civilian intelligence groups) needed to analyze satellite or aerial photographs and extract information from them.  A lot of early optics research came out of the intelligence community.  Or maybe I should say that a lot of early academic research into optics was paid for by the intelligence community.

One fundamental task in the field of optics is the ability to identify an image.  You want to compare that image to a database of images.  In this way you may be able to determine that your spy photograph is correctly documenting activity in a known region.  Or maybe you want to know what a thing is in a spy photograph.  One of my old friends used to tell a story about a photograph one of his relatives showed him in the 1970s.  It was a blurry image of a set of numbers on a metallic looking plate.  The friend’s relative asked him what he thought that image represented.  All guesses were wrong.  The numbers were recorded on a small identification tag on a briefcase sitting in a vehicle in Russia.  The picture had been taken by a spy satellite in the 1970s.

Is the story true?  I don’t know.  I have no reason to doubt it.  But there are also published news stories that claim the CIA (or maybe it was the NSA) had a computer chip in the late 1970s or early 1980s that could process a billion pieces of data (probably bytes) in under a minute.  The government needed this technology to scan news stories from all over the world to identify things of interest to the intelligence community.

So when we needed to extract information from satellite images and maps, we developed optics software to do this for us.  That software created digital images that stored, in computer storage (hard disks) and memory, enough information about a picture to recreate it on a computer screen.  The digital scanning technology was based on camera technology.  Cameras break pictures down into small dot-like elements.  I always thought that old-fashioned cameras took images in a continuous format, like when you use a pen to draw a line on a page.  A friend took me into a darkroom one day and showed me how to develop pictures.  He used a special magnifying glass to examine the “dots” that formed the picture he had just developed to make sure the image was in focus.

So you know how newspapers and magazines print pictures from large fields of dots?  That’s how the analog pictures coming out of the darkroom looked – only the dots were smaller and far more numerous than what the newspapers and magazines were using.  So replicating an image for computer storage only required that we divide the image into a number of dots (“pixels”) and record some attributes about them (size, color, intensity, etc.).
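To make that concrete, here is a tiny sketch of an image stored as a grid of made-up brightness values and flattened into a single vector – the raw material everything below works with:

image = [
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [9, 9, 0, 0],
    [9, 9, 0, 0],
]

def flatten(grid):
    """Turn a 2-D grid of pixel values into one flat vector (a plain list)."""
    return [value for row in grid for value in row]

vector = flatten(image)
print(len(vector), vector)   # 16 numbers describing the whole picture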

That creates a stored image, but how do you understand it well enough to compare it to another image?  This is where NEURAL MATCHING comes in (I didn’t forget about it – you just need to understand why we had to develop neural matching in the first place).  Early attempts to replicate how an animal brain thinks were stymied by relatively simple questions. For example, how does memory work?  It doesn’t work the same way a hard drive stores data.  So far as we know, there is no binary language of brains.  A single neuron may connect to tens of thousands of other neurons and pass along small parts of information via electrical pulses.  But what do those electrical pulses mean?  We’re still trying to figure that out.  So far we have been able to interpret large bursts or bundles of electrical pulses, enabling crude mind-to-mind (or mind-to-computer) communication, but we couldn’t even do that in the 1980s.

The field of optics turned to the field of mathematics to develop neural matching.  We had all these scanned images stored in digital format.  We still save digital images the same way today (although we now use many different formats to accomplish this).  It was obvious that, since computers are governed by basic mathematics, there must be a mathematical way to compare two images to each other.

The solution came from vector-based math, or linear algebra.  A vector is a simple list of things.  In math you may think of a series of numbers like 1, 2, 3, 4, 5, etc. as a vector.  But a vector can consist of anything.  If you add a group of rules for the things you can do with vectors, you get a vector space.  Optics naturally gravitated toward managing (and manipulating) the details of digitally scanned images through vectors.  By breaking down complex images into large groups of vectors of constituent data, computer scientists realized they could search for patterns between vectors.

In a nutshell, that is what neural matching does: compares vectors to vectors.  The purpose of neural matching is to construct a pattern from one set of data and compare that pattern to another set of data.  And as far as I know, this began with optics.  It moved on to fingerprint analysis.  I don’t know of any other early applications for neural matching in computer science, but admittedly this is not my field of expertise.  Still, neural matching is based on vector analysis.
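A toy illustration of that idea, with two invented “images” (flat lists of pixel values): cut each one into small patches, treat each patch as a vector, and count how many patches from one image have a close counterpart in the other.  This is only a sketch of the general approach, not anyone’s production algorithm.

import math

def cosine(a, b):
    """Cosine similarity: 1.0 means the two vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def patches(pixels, size=4):
    """Break a flat pixel vector into consecutive patch vectors."""
    return [pixels[i:i + size] for i in range(0, len(pixels), size)]

image_a = [0, 0, 9, 9, 0, 0, 9, 9, 9, 9, 0, 0, 9, 9, 0, 0]
image_b = [0, 1, 9, 8, 0, 0, 9, 9, 9, 9, 1, 0, 8, 9, 0, 0]   # similar, but not identical

matches = sum(
    1
    for pa in patches(image_a)
    if max(cosine(pa, pb) for pb in patches(image_b)) > 0.95
)
print(matches, "of", len(patches(image_a)), "patches have a close match")

A pixel-by-pixel equality test would call these two images different; the patch-level patterns say they are the same thing.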

So Google Likes Vectors, Right?

A table of search phrases organized as word vectors.

The Google RankBrain algorithm converts the query you type into the search box into a vector. The query vectors are semantically analyzed and compared to previously stored query vectors.

If you’re thinking about how Google might be utilizing vectors to manage its search engine, you should probably remember that in 2015 they announced RankBrain.  RankBrain is a vector-based semantic analysis system that compares a query you just typed into Google to other queries (previously typed in).  RankBrain “suggests” to the main search algorithm that the results for a previously resolved query may suffice for your query.  The main algorithm decides whether to use or ignore that suggestion.  This is a simplistic explanation of a system that Google has kept very secret, but a lot of the research that led up to RankBrain has been published publicly.  You can read the work by the team that developed Word2Vec and RankBrain and see where they came from.
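Here is a rough sketch of that query-to-query comparison.  The word vectors below are tiny, made-up numbers; real systems learn them from enormous text corpora (that is what Word2Vec does), and the actual RankBrain internals have never been published.

import math

word_vectors = {
    "dog":    [0.9, 0.1, 0.0],
    "canine": [0.8, 0.2, 0.0],
    "food":   [0.1, 0.9, 0.1],
    "paris":  [0.0, 0.1, 0.9],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def query_vector(query):
    """Average the vectors of the words we recognize in the query."""
    vectors = [word_vectors[w] for w in query.lower().split() if w in word_vectors]
    return [sum(dim) / len(vectors) for dim in zip(*vectors)]

# Previously resolved queries and their (hypothetical) cached result sets.
stored_queries = {
    "dog food": ["results previously served for dog food"],
    "paris": ["results previously served for paris"],
}

new_query = "canine food"
best = max(stored_queries, key=lambda q: cosine(query_vector(q), query_vector(new_query)))
print(new_query, "resembles", repr(best), "->", stored_queries[best])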

Queries are just vectors of words to a search engine.  So is the text in a document.  This sentence you’re reading occurs after a little more than 3,000 words.  Those 3,000+ words constitute a vector.  But that vector is too large for many text-searching tasks.  Search engines like Bing and Google take the text in a blog post and break it down into many smaller vectors (rather like the digital image vectors are broken down into smaller vectors).  These small text-based vectors are used to identify things like phrases, the names of people and places, concepts like how to pour a glass of water (not as the phrase, but as the idea itself), and the idiomatic style you use when you write.
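A small sketch of what those mini-vectors might look like.  Here they are just overlapping word windows (the sentence is mine, and a real search engine would go much further, mapping each window into numeric form):

text = "pour the water slowly into the glass and stop before it reaches the rim"
words = text.split()                     # the whole passage as one long word vector

def mini_vectors(tokens, size=3):
    """Overlapping word windows: small vectors carved out of the big one."""
    return [tokens[i:i + size] for i in range(len(tokens) - size + 1)]

for window in mini_vectors(words)[:4]:
    print(window)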

Since Google was already using vectors and already managing a huge database of these mini-vectors of text, it seems logical that sooner or later someone would think to use pattern analysis (neural matching) to analyze those vectors.  By identifying patterns in language we can compare them to other patterns in language.

The Google Neural Matching Algorithm Impact

And so Google says that about 30% of today’s queries are processed by this neural matching algorithm.  But what does that mean?  Danny Sullivan, who represents Google as their Search Liaison, wrote on Twitter that Google is treating blocks of text as sort of “super synonyms”.  I inferred from the context of his tweet that he was just repeating what was said in the Google live presentation (I think a video is now available online).

I’m not sure “super synonyms” does the concept justice, but I can see how it would convey to lay people the idea that the search engine isn’t merely looking at “words replacing other words”.  These “super synonyms” are more like expressions of ideas replacing other expressions of ideas.  It’s a truly semantic gesture.

You could talk about “capital of France” or you could talk about “Paris”.  You’re talking about the same place, though, right?  The team that developed RankBrain and Word2Vec figured that out.  So imagine taking that technology and applying it to matching patterns between queries and documents.

RankBrain matches patterns between queries.

Google’s Neural Matching 2018 algorithm matches patterns between queries and documents.

Both methods should reduce the amount of processing (and resources) required to resolve search queries.  But whereas RankBrain probably saves a lot of processing time and power, this new neural matching algorithm improves the accuracy of query resolution.  It does so by looking at more documents than the old “we need these words in the document or the anchor text” way of looking at things.

Google Was Already Doing Something Like Neural Matching

If you have used Google Translate you have already used a system that matches patterns of words to other patterns of words.  Google Translate built its engine and database (in part) from user-submitted translations.  The user-submitted translations included individual words, phrases, and documents.

We don’t have to know whether Google Translate was using neural matching to understand that they were translating phrases in one language to phrases in another language.  This is functionally equivalent to the old optics-based neural matching algorithms.  An English phrase is a vector.  An equivalent phrase in Spanish or Romanian is also a vector.

Sooner or later, someone at Google put 2 and 2 together and came up with a way to compare phrases to phrases between queries and documents.

So what does this mean for search engine users?  It sounds to me like you no longer have to revise your queries as much as you used to.  Time was, if you searched for “canine food” on Google you’d get some really obscure documents because, frankly, most people don’t write “canine food”.  In English the majority of people (including the Websites selling the stuff) write “dog food”.  Today if you search Google for “canine food” they’ll show you many listings (including advertisements) for documents that only mention “dog food”.  This is what semantic search is all about: understanding, from one phrase, that you’re talking about a concept represented by many different phrases.

“Dog food” may not be the best example.  Ben Gomes, the Google lead engineer who announced this new neural matching algorithm, used the query “why does my television look strange” as an example.  Somewhere in the top search results you should see an article from CNET that describes “the soap opera effect”.  The word “strange” doesn’t occur in the article.  Nor does the phrase “why does”.  Nor does “my television”.  In other words, Google’s neural matching algorithm took your query, converted it into a semantic pattern, and then compared that semantic pattern to other semantic patterns extracted from billions of Web documents.
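Here is a toy version of that query-to-document matching.  Each text is reduced to an invented vector of “concept scores” (the numbers are made up purely for illustration); notice that the query shares no words with the winning document, yet their patterns are close.

import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

query_text = "why does my television look strange"
query_pattern = [0.9, 0.8, 0.1]          # hypothetical "concept" vector for the query

documents = {
    "What is the soap opera effect and how to turn it off": [0.85, 0.75, 0.15],
    "Best dog food brands reviewed": [0.05, 0.10, 0.95],
}

best = max(documents, key=lambda title: cosine(documents[title], query_pattern))
print("Top result for", repr(query_text), "->", best)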

A chart illustrating how a universal language could be used to translate concepts between different forms of storage or representation.

What is neural matching? It’s pattern matching.  Neural matching can be used to convert ideas or concepts represented textually, visually, aurally, etc. into a single common format and then into other formats.

The pattern vector is (I am guessing) stored in some internal format.  Call it a concept grammar, a sort of universal language that allows Google to map queries written one way to answers written another way.  They said they started doing this for multi-language search years ago.  How did they accomplish that?  I don’t know.  Vector analysis could have been used, or they might have used a more primitive method at first.

Some law enforcement and intelligence agency applications use blindfolded data search applications that work in the same way.  A searcher at one agency enters a query into the system.  The query is translated into an intermediate format and passed to a query engine at another agency.  The second agency’s query engine translates the query into their own format and searches their database.  Any relevant information records retrieved are then translated into the intermediate format and passed back to the query engine.  The searcher receives the information as translated into their own system’s format.  In this way, the two agencies don’t disclose critical infrastructure information to each other while sharing data.
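A simplified sketch of that exchange.  The field names and records below are invented; the point is only that neither side ever sees the other’s internal schema, just the shared intermediate format.

def agency_a_send(query):
    """Agency A expresses its query in the shared intermediate format."""
    return {"subject": query["person_of_interest"], "date": query["when"]}

def agency_b_search(intermediate_query):
    """Agency B maps the shared format onto its own schema, searches, and answers in the shared format."""
    internal = {"name": intermediate_query["subject"], "seen_on": intermediate_query["date"]}
    database = [{"name": "J. Doe", "seen_on": "1979-03-02", "location": "Vienna"}]
    hits = [r for r in database
            if r["name"] == internal["name"] and r["seen_on"] == internal["seen_on"]]
    return [{"subject": r["name"], "date": r["seen_on"]} for r in hits]

request = agency_a_send({"person_of_interest": "J. Doe", "when": "1979-03-02"})
print(agency_b_search(request))   # results come back in the shared format only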

I would not be surprised to hear that Google Translate is using vector analysis and neural matching today.  I am pretty sure their translations are much better this year than they have been in previous years.  I’ve been able to engage in more real-time online discussions in other languages thanks to Google Translate than I have in years past.  Maybe that’s just me improving my ability to use Google Translate.  Maybe Google Translate just has a better dictionary of phrases and words.  And maybe they improved the whole system.

Neural Matching Could Improve Text-to-Image Searching

Google also announced that they have improved image search.  I’m not thrilled with their image search, never have been, but I cut them some slack because most images are poorly described and/or annotated.  In fact, most people don’t include any meta information with their images at all.

Traditional image databases require a lot of meta data.  Someone has to type all that data into the system.  The Web is a huge information repository, but when it comes to describing its digital content it’s just a mess.

We’ve known for years that Googlers have been experimenting with artificial intelligence that analyzes images.  There have been some spectacular failures in this area.  But neural matching is the obvious way to go with image analysis.  As Google and other companies analyzing images improve their ability to decompose digital images into smaller, meaningful vectors, they’ll be able to generate their own labels (meta information) for those sub-vectors.

In effect, Google can create its own image annotation language.  And if they can create an image annotation language, they can translate those annotations into a universal conceptual language such as the hypothetical concept grammar I describe above.  Is Google doing this?  I have no idea.  But if someone were to ask me to develop a system capable of matching text-based queries to images, and I had these technological capabilities to work with, that is probably how I would approach the task.
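If I were sketching that approach, it might look something like this: an image region is reduced to a vector (by some image-analysis step I am waving my hands over), and the annotation is simply the label whose vector sits closest to it.  Every vector and label below is invented.

label_vectors = {
    "stained glass window": [0.9, 0.2, 0.1],
    "satellite photo": [0.1, 0.9, 0.2],
    "handwritten invoice": [0.2, 0.1, 0.9],
}

image_region = [0.85, 0.25, 0.12]        # hypothetical output of an image-analysis step

def squared_distance(a, b):
    """Smaller means the two vectors describe more similar things."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

annotation = min(label_vectors, key=lambda name: squared_distance(label_vectors[name], image_region))
print("auto-generated annotation:", annotation)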

Conclusion: So What IS Neural Matching?

In a nutshell, neural matching is a simple method for comparing unlike things in a common format or language.  The problem in optics was that you might have two pictures of a document or a geographical area that looked similar to the human eye, but because of differences in details a normal pixel-by-pixel comparison would fail to produce a valid match.  You couldn’t really depend on exact-match comparisons working.

Neural matching supposes that if you create a collection of smaller patterns that, taken together, comprise a larger pattern, you can more accurately identify two similar images (or groups of words) without requiring an exact match.  So you could take a picture of part of a document or a map and use that picture to search a database of stored images to find a larger image of the same document or map.

This would work in facial recognition technology, too.  Neural matching uses vectors small enough for easy comparison but large enough to contain sequences or patterns of data that are relatively unique.  You may see the same patterns occur in a lot of different things, but as you string more patterns together you narrow the field of things that could be possible matches.

When we type a query into a search engine, we’re using a relatively simple mode of expression to convey a lot of information.  Search engines now take other data into consideration, including previous queries we just typed in, our physical location, and more.  This meta information for each query can help Google identify the concepts we’re interested in better than ever before.

And by translating our queries into something like my proposed concept grammar, Google seems to be able to find documents that match what we’re looking for with greater ease, perhaps in less time, and in a more satisfying way.

Maybe one day we’ll understand what Google is doing better than we do today.  But for now, I think I’ve got a pretty good idea of where they are going because they seem to be building on top of generations of developments in vector-based analysis.